Covariance matrix estimation for stationary time series
Xiao, Han; Wu, Wei Biao
2011-01-01
We obtain a sharp convergence rate for banded covariance matrix estimates of stationary processes. A precise order of magnitude is derived for the spectral radius of sample covariance matrices. We also consider a thresholded covariance matrix estimator that can better characterize sparsity if the true covariance matrix is sparse. As our main tool, we implement Toeplitz's [Math. Ann. 70 (1911) 351–376] idea and relate eigenvalues of covariance matrices to the spectral densities or Fourier transforms...
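The banding idea described above (keep sample autocovariances only up to a fixed lag, zero out the rest) can be sketched in a few lines of numpy. This is an illustrative sketch, not the authors' code; the function name and the bandwidth choice are assumptions.

```python
import numpy as np

def banded_cov_estimate(x, bandwidth):
    """Banded Toeplitz covariance estimate for a stationary series (sketch).

    Sample autocovariances gamma_hat(k) are kept only for lags |k| <= bandwidth;
    longer-range lags are set to zero, giving a banded Toeplitz matrix.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    # sample autocovariances gamma_hat(k) = (1/n) * sum_t xc[t] * xc[t+k]
    gammas = np.array([xc[: n - k] @ xc[k:] / n for k in range(n)])
    gammas[bandwidth + 1 :] = 0.0  # banding: zero out lags beyond the bandwidth
    i, j = np.indices((n, n))
    return gammas[np.abs(i - j)]   # Toeplitz: entry depends only on |i - j|

rng = np.random.default_rng(0)
x = rng.standard_normal(200)
S = banded_cov_estimate(x, bandwidth=5)
print(S.shape)              # (200, 200)
print(np.allclose(S, S.T))  # True: a banded Toeplitz estimate is symmetric
```

Note the estimate is not guaranteed positive definite; the papers in this area study exactly how the bandwidth must grow with the sample size for the estimate to converge in spectral norm.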
Construction of covariance matrix for experimental data
International Nuclear Information System (INIS)
Liu Tingjin; Zhang Jianhua
1992-01-01
For evaluators and experimenters, the information is complete only when the covariance matrix is given. The covariance matrix of indirectly measured data has been constructed and discussed. As an example, the covariance matrix of the ²³Na(n,2n) cross section is constructed. A reasonable result is obtained.
ANL Critical Assembly Covariance Matrix Generation - Addendum
Energy Technology Data Exchange (ETDEWEB)
McKnight, Richard D. [Argonne National Lab. (ANL), Argonne, IL (United States); Grimm, Karl N. [Argonne National Lab. (ANL), Argonne, IL (United States)
2014-01-13
In March 2012, a report was issued on covariance matrices for Argonne National Laboratory (ANL) critical experiments. That report detailed the theory behind the calculation of covariance matrices and the methodology used to determine the matrices for a set of 33 ANL experimental set-ups. Since that time, three new experiments have been evaluated and approved. This report essentially updates the previous report by adding in these new experiments to the preceding covariance matrix structure.
Heteroscedasticity resistant robust covariance matrix estimator
Czech Academy of Sciences Publication Activity Database
Víšek, Jan Ámos
2010-01-01
Vol. 17, No. 27 (2010), pp. 33-49. ISSN 1212-074X. Grant - others: GA UK(CZ) GA402/09/0557. Institutional research plan: CEZ:AV0Z10750506. Keywords: Regression * Covariance matrix * Heteroscedasticity * Resistant. Subject RIV: BB - Applied Statistics, Operational Research. http://library.utia.cas.cz/separaty/2011/SI/visek-heteroscedasticity resistant robust covariance matrix estimator.pdf
Convex Banding of the Covariance Matrix.
Bien, Jacob; Bunea, Florentina; Xiao, Luo
2016-01-01
We introduce a new sparse estimator of the covariance matrix for high-dimensional models in which the variables have a known ordering. Our estimator, which is the solution to a convex optimization problem, is equivalently expressed as an estimator which tapers the sample covariance matrix by a Toeplitz, sparsely-banded, data-adaptive matrix. As a result of this adaptivity, the convex banding estimator enjoys theoretical optimality properties not attained by previous banding or tapered estimators. In particular, our convex banding estimator is minimax rate adaptive in Frobenius and operator norms, up to log factors, over commonly-studied classes of covariance matrices, and over more general classes. Furthermore, it correctly recovers the bandwidth when the true covariance is exactly banded. Our convex formulation admits a simple and efficient algorithm. Empirical studies demonstrate its practical effectiveness and illustrate that our exactly-banded estimator works well even when the true covariance matrix is only close to a banded matrix, confirming our theoretical results. Our method compares favorably with all existing methods, in terms of accuracy and speed. We illustrate the practical merits of the convex banding estimator by showing that it can be used to improve the performance of discriminant analysis for classifying sound recordings.
Hierarchical matrix approximation of large covariance matrices
Litvinenko, Alexander; Genton, Marc G.; Sun, Ying; Tempone, Raul
2015-01-01
We approximate large non-structured covariance matrices in the H-matrix format with a log-linear computational cost and storage O(n log n). We compute the inverse, Cholesky decomposition and determinant in H-format. As an example we consider the class of Matérn covariance functions, which are very popular in spatial statistics, geostatistics, machine learning and image analysis. Applications are: kriging and optimal design.
ACORNS, Covariance and Correlation Matrix Diagonalization
International Nuclear Information System (INIS)
Szondi, E.J.
1990-01-01
1 - Description of program or function: The program allows the user to verify the different types of covariance/correlation matrices used in activation neutron spectrometry. 2 - Method of solution: The program performs the diagonalization of the input covariance/relative covariance/correlation matrices. The eigenvalues are then analyzed to determine the rank of the matrices. If the eigenvectors of the pertinent correlation matrix have also been calculated, the program can perform a complete factor analysis (generation of the factor matrix and its rotation in Kaiser's 'varimax' sense to select the origin of the correlations). 3 - Restrictions on the complexity of the problem: Matrix size is limited to 60 on PDP and to 100 on IBM PC/AT.
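The diagonalization-and-rank check described in this record can be sketched as follows. This is a minimal numpy illustration, not the ACORNS code itself; the function name and tolerance are assumptions.

```python
import numpy as np

def correlation_rank(corr, tol=1e-10):
    """Diagonalize a symmetric covariance/correlation matrix and estimate its rank.

    Mirrors the kind of verification ACORNS performs: the eigenvalues are
    inspected, and near-zero eigenvalues indicate rank deficiency, while
    negative ones indicate an invalid (non-PSD) matrix.
    """
    eigvals, eigvecs = np.linalg.eigh(np.asarray(corr, dtype=float))
    rank = int(np.sum(eigvals > tol * eigvals.max()))
    negative = eigvals[eigvals < -tol]  # a valid covariance matrix has none
    return rank, negative

# A 3x3 correlation matrix whose first two variables are perfectly correlated:
C = np.array([[1.0, 1.0, 0.3],
              [1.0, 1.0, 0.3],
              [0.3, 0.3, 1.0]])
rank, neg = correlation_rank(C)
print(rank)      # 2: the matrix is rank-deficient
print(len(neg))  # 0: no negative eigenvalues, so it is positive semidefinite
```

A full factor analysis would additionally keep the eigenvectors returned by `eigh` and rotate them (e.g. varimax), which is omitted here.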
Hierarchical matrix approximation of large covariance matrices
Litvinenko, Alexander; Genton, Marc G.; Sun, Ying
2015-01-01
We approximate large non-structured Matérn covariance matrices of size n×n in the H-matrix format with a log-linear computational cost and storage O(kn log n), where rank k ≪ n is a small integer. Applications are: spatial statistics, machine learning and image analysis, kriging and optimal design.
The K-Step Spatial Sign Covariance Matrix
Croux, C.; Dehon, C.; Yadine, A.
2010-01-01
The Sign Covariance Matrix is an orthogonally equivariant estimator of multivariate scale. It is often used as an easy-to-compute and highly robust estimator. In this paper we propose a k-step version of the Sign Covariance Matrix, which improves its efficiency while keeping the maximal breakdown...
Bayesian hierarchical model for large-scale covariance matrix estimation.
Zhu, Dongxiao; Hero, Alfred O
2007-12-01
Many bioinformatics problems implicitly depend on estimating a large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and OMICS data analysis.
Position Error Covariance Matrix Validation and Correction
Frisbee, Joe, Jr.
2016-01-01
In order to calculate operationally accurate collision probabilities, the position error covariance matrices predicted at times of closest approach must be sufficiently accurate representations of the position uncertainties. This presentation will discuss why the Gaussian distribution is a reasonable expectation for the position uncertainty and how this assumed distribution type is used in the validation and correction of position error covariance matrices.
Positive semidefinite integrated covariance estimation, factorizations and asynchronicity
DEFF Research Database (Denmark)
Boudt, Kris; Laurent, Sébastien; Lunde, Asger
2017-01-01
An estimator of the ex-post covariation of log-prices under asynchronicity and microstructure noise is proposed. It uses the Cholesky factorization of the covariance matrix in order to exploit the heterogeneity in trading intensities to estimate the different parameters sequentially with as many...
HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS.
Fan, Jianqing; Liao, Yuan; Mincheva, Martina
2011-01-01
The variance-covariance matrix plays a central role in the inferential theories of high-dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not applicable to many financial problems. Classical methods of estimating the covariance matrices are based on strict factor models, which assume independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming a sparse error covariance matrix, we allow the presence of cross-sectional correlation even after taking out common factors, which enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied.
The Performance Analysis Based on SAR Sample Covariance Matrix
Directory of Open Access Journals (Sweden)
Esra Erten
2012-03-01
Multi-channel systems appear in several fields of application in science. In the Synthetic Aperture Radar (SAR) context, multi-channel systems may refer to different domains, such as multi-polarization, multi-interferometric or multi-temporal data, or even a combination of them. Due to the inherent speckle phenomenon present in SAR images, a statistical description of the data is almost mandatory for its utilization. The complex images acquired over natural media in general present zero-mean circular Gaussian characteristics. In this case, second-order statistics such as the multi-channel covariance matrix fully describe the data. For practical situations, however, the covariance matrix has to be estimated using a limited number of samples, and this sample covariance matrix follows the complex Wishart distribution. In this context, the eigendecomposition of the multi-channel covariance matrix has been shown to be of high relevance in different areas regarding the physical properties of the imaged scene. Specifically, the maximum eigenvalue of the covariance matrix has been frequently used in applications such as target or change detection, estimation of the dominant scattering mechanism in polarimetric data, moving target indication, etc. In this paper, the statistical behavior of the maximum eigenvalue derived from the eigendecomposition of the sample multi-channel covariance matrix of multi-channel SAR images is presented in simplified form for the SAR community. Validation is performed against simulated data, and examples of estimation and detection problems using the analytical expressions are given as well.
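A minimal simulation of the quantity studied in this record, the maximum eigenvalue of a sample covariance matrix built from zero-mean circular complex Gaussian channels, might look like this. It is an illustrative sketch, not from the paper; the dimensions and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 3, 64                     # p channels, n looks (samples)
# zero-mean circular complex Gaussian samples, shape (p, n), unit variance
z = (rng.standard_normal((p, n)) + 1j * rng.standard_normal((p, n))) / np.sqrt(2)
C_hat = z @ z.conj().T / n       # sample covariance (scaled complex Wishart)
lam_max = np.linalg.eigvalsh(C_hat).max()  # dominant eigenvalue, used in detection
print(C_hat.shape)  # (3, 3)
print(lam_max > 0)  # True: C_hat is Hermitian PSD, so the top eigenvalue is positive
```

Repeating this over many independent draws gives an empirical distribution of `lam_max`, against which the paper's analytical expressions can be validated.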
Explicit Covariance Matrix for Particle Measurement Precision
Karimäki, Veikko
1997-01-01
We derive explicit and precise formulae for the 3×3 error matrix of the particle transverse momentum, direction and impact parameter. The error matrix elements are expressed as functions of up to fourth-order statistical moments of the measured coordinates. The formulae are valid for any curvature and track length in the case of negligible multiple scattering.
Covariance, correlation matrix, and the multiscale community structure of networks.
Shen, Hua-Wei; Cheng, Xue-Qi; Fang, Bin-Xing
2010-07-01
Empirical studies show that real-world networks often exhibit multiple scales of topological description. However, it is still an open problem how to identify the intrinsic multiple scales of networks. In this paper, we consider detecting the multiscale community structure of networks from the perspective of dimension reduction. According to this perspective, a covariance matrix of a network is defined to uncover the multiscale community structure through translation and rotation transformations. It is proved that the covariance matrix is the unbiased version of the well-known modularity matrix. We then point out that the translation and rotation transformations fail to deal with heterogeneous networks, which are very common in nature and society. To address this problem, a correlation matrix is proposed by introducing a rescaling transformation into the covariance matrix. Extensive tests on real-world and artificial networks demonstrate that the correlation matrix significantly outperforms the covariance matrix, and equivalently the modularity matrix, in identifying the multiscale community structure of networks. This work provides a novel perspective on the identification of community structure, and thus various dimension reduction methods might be used for this task. Through introducing the correlation matrix, we further conclude that the rescaling transformation is crucial to identifying the multiscale community structure of networks, as are the translation and rotation transformations.
Co-movements among financial stocks and covariance matrix analysis
Sharifi, Saba
2003-01-01
The major theories of finance leading into the main body of this research are discussed, and our experiments on studying the risk and co-movements among stocks are presented. This study leads to the application of Random Matrix Theory (RMT). The idea of this theory refers to the importance of the empirically measured correlation (or covariance) matrix, C, in finance and particularly in the theory of optimal portfolios. However, this matrix has recently come into question, as a large part of ...
An Empirical State Error Covariance Matrix for Batch State Estimation
Frisbee, Joseph H., Jr.
2011-01-01
State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it directly follows as to how to determine the standard empirical state error covariance matrix. This matrix will contain the total uncertainty in the...
Multiple feature fusion via covariance matrix for visual tracking
Jin, Zefenfen; Hou, Zhiqiang; Yu, Wangsheng; Wang, Xin; Sun, Hui
2018-04-01
Aiming at the problem of complicated dynamic scenes in visual target tracking, a multi-feature fusion tracking algorithm based on the covariance matrix is proposed to improve the robustness of the tracking algorithm. In the framework of a quantum genetic algorithm, this paper uses the region covariance descriptor to fuse the color, edge and texture features. It also uses a fast covariance intersection algorithm to update the model. The low dimension of the region covariance descriptor, the fast convergence speed and strong global optimization ability of the quantum genetic algorithm, and the fast computation of the fast covariance intersection algorithm are used to improve the computational efficiency of the fusion, matching, and updating process, so that the algorithm achieves fast and effective multi-feature fusion tracking. The experiments prove that the proposed algorithm can not only achieve fast and robust tracking but also effectively handle interference from occlusion, rotation, deformation, motion blur and so on.
Some Algorithms for the Conditional Mean Vector and Covariance Matrix
Directory of Open Access Journals (Sweden)
John F. Monahan
2006-08-01
We consider here the problem of computing the mean vector and covariance matrix for a conditional normal distribution, considering especially a sequence of problems where the conditioning variables are changing. The sweep operator provides one simple general approach that is easy to implement and update. A second, more goal-oriented general method avoids explicit computation of the vector and matrix, while enabling easy evaluation of the conditional density for likelihood computation or easy generation from the conditional distribution. The covariance structure that arises from the special case of an ARMA(p, q) time series can be exploited for substantial improvements in computational efficiency.
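The conditional mean and covariance these algorithms compute have a direct closed form. The following is a plain-numpy sketch of the standard partitioned-normal formulas, not the sweep-operator implementation from the paper; the function and variable names are assumptions.

```python
import numpy as np

# Conditional distribution of a multivariate normal: partition x = (x1, x2)
# with x2 = b observed; then
#   mu_{1|2}    = mu1 + S12 S22^{-1} (b - mu2)
#   Sigma_{1|2} = S11 - S12 S22^{-1} S21
def conditional_normal(mu, Sigma, idx_keep, idx_cond, b):
    mu = np.asarray(mu, dtype=float)
    S = np.asarray(Sigma, dtype=float)
    S11 = S[np.ix_(idx_keep, idx_keep)]
    S12 = S[np.ix_(idx_keep, idx_cond)]
    S22 = S[np.ix_(idx_cond, idx_cond)]
    K = S12 @ np.linalg.inv(S22)          # regression coefficients S12 S22^{-1}
    mu_c = mu[idx_keep] + K @ (b - mu[idx_cond])
    Sigma_c = S11 - K @ S12.T             # does not depend on the observed value b
    return mu_c, Sigma_c

mu = np.array([0.0, 1.0, 2.0])
Sigma = np.array([[2.0, 0.5, 0.2],
                  [0.5, 1.0, 0.3],
                  [0.2, 0.3, 1.5]])
mu_c, Sigma_c = conditional_normal(mu, Sigma, [0, 1], [2], np.array([3.0]))
print(mu_c)     # conditional mean of (x0, x1) given x2 = 3
print(Sigma_c)  # conditional covariance of (x0, x1)
```

When the conditioning set changes repeatedly, recomputing `inv(S22)` from scratch each time is wasteful; that is exactly the situation where the sweep operator's cheap updates pay off.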
Rotational covariance and light-front current matrix elements
International Nuclear Information System (INIS)
Keister, B.D.
1994-01-01
Light-front current matrix elements for elastic scattering from hadrons with spin 1 or greater must satisfy a nontrivial constraint associated with the requirement of rotational covariance for the current operator. Using a model ρ meson as a prototype for hadronic quark models, this constraint and its implications are studied at both low and high momentum transfers. In the kinematic region appropriate for asymptotic QCD, helicity rules, together with the rotational covariance condition, yield an additional relation between the light-front current matrix elements
The covariance matrix of derived quantities and their combination
International Nuclear Information System (INIS)
Zhao, Z.; Perey, F.G.
1992-06-01
The covariance matrix of quantities derived from measured data via nonlinear relations is only approximate, since it is a function of the measured data taken as estimates for the true values of the measured quantities. The evaluation of such derived quantities entails new estimates for the true values of the measured quantities and consequently implies a modification of the covariance matrix of the derived quantities that was used in the evaluation process. Failure to recognize such an implication can lead to inconsistencies between the results of different evaluation strategies. In this report we show that an iterative procedure can eliminate such inconsistencies.
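The first-order propagation that produces such derived-quantity covariance matrices can be sketched as follows: the standard sandwich formula Sigma_y ≈ J Sigma_x Jᵀ with a numerical Jacobian. This is an assumption-laden illustration of the general technique, not the report's iterative consistency procedure.

```python
import numpy as np

# First-order ("sandwich") propagation of a covariance matrix through a
# nonlinear map y = f(x): Sigma_y ≈ J Sigma_x J^T, with J the Jacobian of f
# evaluated at the measured values. The approximation error is what makes the
# derived covariance matrix depend on the estimates used, as discussed above.
def propagate_cov(f, x, Sigma_x, eps=1e-6):
    x = np.asarray(x, dtype=float)
    y0 = np.asarray(f(x), dtype=float)
    J = np.empty((y0.size, x.size))
    for j in range(x.size):          # forward-difference Jacobian, column by column
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (np.asarray(f(x + dx)) - y0) / eps
    return J @ Sigma_x @ J.T

# Example: derived quantities (x0*x1, x0/x1) from two correlated measurements.
x = np.array([2.0, 4.0])
Sigma_x = np.array([[0.04, 0.01],
                    [0.01, 0.09]])
Sigma_y = propagate_cov(lambda v: np.array([v[0] * v[1], v[0] / v[1]]), x, Sigma_x)
print(Sigma_y)  # approximate covariance of the derived quantities
```

Re-evaluating `x` (and hence `J`) after each evaluation step, as the report's iterative procedure does, changes `Sigma_y`; this sketch shows only a single linearization.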
Bayesian tests on components of the compound symmetry covariance matrix
Mulder, J.; Fox, J.P.
2013-01-01
Complex dependency structures are often conditionally modeled, where random effects parameters are used to specify the natural heterogeneity in the population. When interest is focused on the dependency structure, inferences can be made from a complex covariance matrix using a marginal modeling...
An Empirical State Error Covariance Matrix Orbit Determination Example
Frisbee, Joseph H., Jr.
2015-01-01
State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. Then it follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully include all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty and whether the source is anticipated or not. It is expected that the empirical error covariance matrix will give a better statistical representation of the state error in poorly modeled systems or when sensor performance...
MIMO Radar Transmit Beampattern Design Without Synthesising the Covariance Matrix
Ahmed, Sajid
2013-10-28
Compared to phased-array radars, multiple-input multiple-output (MIMO) radars provide more degrees of freedom (DOF) that can be exploited for improved spatial resolution, better parametric identifiability, lower side-lobe levels at the transmitter/receiver, and a variety of transmit beampattern designs. The design of the transmit beampattern generally requires the waveforms to have arbitrary auto- and cross-correlation properties. The generation of such waveforms is a complicated two-step process. In the first step a waveform covariance matrix is synthesised, which is a constrained optimisation problem. In the second step, to realise this covariance matrix, actual waveforms are designed, which is also a constrained optimisation problem. Our proposed scheme converts this two-step constrained optimisation problem into a one-step unconstrained optimisation problem. In the proposed scheme, in contrast to synthesising the covariance matrix for the desired beampattern, nT independent finite-alphabet constant-envelope waveforms are generated and pre-processed, with a weight matrix W, before transmitting from the antennas. In this work, two weight matrices are proposed that can be easily optimised for desired symmetric and non-symmetric beampatterns and guarantee equal average power transmission from each antenna. Simulation results validate our claims.
Exploiting Data Sparsity In Covariance Matrix Computations on Heterogeneous Systems
Charara, Ali M.
2018-05-24
Covariance matrices are ubiquitous in computational sciences, typically describing the correlation of elements of large multivariate spatial data sets. For example, covariance matrices are employed in climate/weather modeling for the maximum likelihood estimation to improve prediction, as well as in computational ground-based astronomy to enhance the observed image quality by filtering out noise produced by the adaptive optics instruments and atmospheric turbulence. The structure of these covariance matrices is dense, symmetric, positive-definite, and often data-sparse, therefore hierarchically of low rank. This thesis investigates the performance limit of dense matrix computations (e.g., Cholesky factorization) on covariance matrix problems as the number of unknowns grows, and in the context of the aforementioned applications. We employ recursive formulations of some of the basic linear algebra subroutines (BLAS) to accelerate the covariance matrix computation further, while reducing data traffic across the memory subsystem layers. However, dealing with large data sets (i.e., covariance matrices of billions in size) can rapidly become prohibitive in memory footprint and algorithmic complexity. Most importantly, this thesis investigates the tile low-rank data format (TLR), a new compressed data structure and layout, which is valuable in exploiting data sparsity by approximating the operator. The TLR compressed data structure allows approximating the original problem up to user-defined numerical accuracy. This comes at the expense of dealing with tasks with much lower arithmetic intensities than traditional dense computations. In fact, this thesis consolidates the two trends of dense and data-sparse linear algebra for HPC. Not only does the thesis leverage recursive formulations for dense Cholesky-based matrix algorithms, but it also implements a novel TLR-Cholesky factorization using batched linear algebra operations to increase hardware occupancy and...
ANGELO-LAMBDA, Covariance matrix interpolation and mathematical verification
International Nuclear Information System (INIS)
Kodeli, Ivo
2007-01-01
1 - Description of program or function: The codes ANGELO-2.3 and LAMBDA-2.3 are used for the interpolation of the cross section covariance data from the original to a user-defined energy group structure, and for the mathematical tests of the matrices, respectively. The LAMBDA-2.3 code calculates the eigenvalues of the matrices (either the original or the converted) and classifies them accordingly into positive and negative matrices. This verification is strongly recommended before using any covariance matrices. These versions of the two codes are extended versions of the previous codes available in the package NEA-1264 - ZZ-VITAMIN-J/COVA. They were specifically developed for the purposes of the OECD LWR UAM benchmark, in particular for the processing of the ZZ-SCALE5.1/COVA-44G cross section covariance matrix library retrieved from the SCALE-5.1 package. Either the original SCALE-5.1 libraries or the libraries separated into several files by nuclide can (in principle) be processed by the ANGELO/LAMBDA codes, but the use of the one-nuclide data is strongly recommended. Due to large deviations of the correlation matrix terms from unity observed in some SCALE-5.1 covariance matrices, the previously more severe acceptance condition in the ANGELO-2.3 code was relaxed. In case the correlation coefficients exceed 1.0, only a warning message is issued, and the coefficients are replaced by 1.0. 2 - Methods: ANGELO-2.3 interpolates the covariance matrices to a union grid using flat weighting. The LAMBDA-2.3 code includes the mathematical routines to calculate the eigenvalues of the covariance matrices. 3 - Restrictions on the complexity of the problem: The algorithm used in ANGELO is relatively simple, therefore interpolations involving an energy group structure very different from the original (e.g. a large difference in the number of energy groups) may not be accurate. In particular, in the case of the MT=1018 data (fission spectra covariances) the algorithm may not be...
DANTE, Activation Analysis Neutron Spectra Unfolding by Covariance Matrix Method
International Nuclear Information System (INIS)
Petilli, M.
1981-01-01
1 - Description of problem or function: The program evaluates activation measurements of reactor neutron spectra and unfolds the results for dosimetry purposes. Different evaluation options are foreseen: absolute or relative fluxes and different iteration algorithms. 2 - Method of solution: A least-squares fit method is used. A correlation between the available data and their uncertainties has been introduced by means of flux and activity variance-covariance matrices. Cross sections are assumed to be constant, i.e. with variance-covariance matrix equal to zero. The Lagrange multipliers method has been used for calculating the solution. 3 - Restrictions on the complexity of the problem: 9 activation experiments can be analyzed; 75 energy groups are accepted.
Ultracentrifuge separative power modeling with multivariate regression using covariance matrix
International Nuclear Information System (INIS)
Migliavacca, Elder
2004-01-01
In this work, the least-squares methodology with covariance matrix is applied to fit a curve to the data and obtain a performance function for the separative power δU of an ultracentrifuge as a function of variables that are experimentally controlled. The experimental data refer to 460 experiments on the ultracentrifugation process for uranium isotope separation. The experimental uncertainties related to these independent variables are considered in the calculation of the experimental separative power values, determining an experimental data input covariance matrix. The process variables which significantly influence the δU values are chosen in order to give information on the ultracentrifuge behaviour when submitted to several levels of feed flow rate F, cut θ and product line pressure P_p. After the model goodness-of-fit validation, a residual analysis is carried out to verify the assumptions concerning randomness and independence, and in particular the existence of residual heteroscedasticity with respect to any regression model variable. Surface curves are made relating the separative power to the control variables F, θ and P_p to compare the fitted model with the experimental data and, finally, to calculate their optimized values. (author)
Sparse Covariance Matrix Estimation by DCA-Based Algorithms.
Phan, Duy Nhat; Le Thi, Hoai An; Dinh, Tao Pham
2017-11-01
This letter proposes a novel approach using the [Formula: see text]-norm regularization for the sparse covariance matrix estimation (SCME) problem. The objective function of SCME problem is composed of a nonconvex part and the [Formula: see text] term, which is discontinuous and difficult to tackle. Appropriate DC (difference of convex functions) approximations of [Formula: see text]-norm are used that result in approximation SCME problems that are still nonconvex. DC programming and DCA (DC algorithm), powerful tools in nonconvex programming framework, are investigated. Two DC formulations are proposed and corresponding DCA schemes developed. Two applications of the SCME problem that are considered are classification via sparse quadratic discriminant analysis and portfolio optimization. A careful empirical experiment is performed through simulated and real data sets to study the performance of the proposed algorithms. Numerical results showed their efficiency and their superiority compared with seven state-of-the-art methods.
A New Heteroskedastic Consistent Covariance Matrix Estimator using Deviance Measure
Directory of Open Access Journals (Sweden)
Nuzhat Aftab
2016-06-01
In this article we propose a new heteroskedastic consistent covariance matrix estimator, HC6, based on a deviance measure. We study the finite-sample behavior of the new estimator and compare it with other estimators of this kind, HC1, HC3 and HC4m, which are used in the presence of leverage observations. A simulation study is conducted to assess the effect of various levels of heteroskedasticity on the size and power of the quasi-t test with HC estimators. Results show that the test statistic based on the new estimator has a better asymptotic approximation and less size distortion than the other estimators for small sample sizes when a high level of heteroskedasticity is present in the data.
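For reference, the standard sandwich estimators the article compares against can be sketched as follows; this shows HC1 and HC3 only (the deviance-based HC6 of the article is not reproduced), and the regression data are simulated:

```python
import numpy as np

# Sandwich (heteroskedasticity-consistent) covariance estimators:
# (X^T X)^-1 X^T Omega X (X^T X)^-1, where Omega differs per variant.
rng = np.random.default_rng(2)
n, k = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
y = X @ np.array([1.0, 0.5, -0.2]) + rng.normal(scale=1 + np.abs(X[:, 1]), size=n)

XtX_inv = np.linalg.inv(X.T @ X)
e = y - X @ (XtX_inv @ X.T @ y)               # OLS residuals
h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)   # leverages h_ii

def sandwich(omega):
    # omega holds the per-observation diagonal of Omega
    return XtX_inv @ (X.T * omega) @ X @ XtX_inv

hc1 = sandwich(e**2 * n / (n - k))            # degrees-of-freedom correction
hc3 = sandwich(e**2 / (1 - h)**2)             # leverage-based correction
```

HC3 inflates the weight of high-leverage observations via 1/(1-h_ii)², which is why it is preferred when leverage points are present.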
Robust adaptive multichannel SAR processing based on covariance matrix reconstruction
Tan, Zhen-ya; He, Feng
2018-04-01
With the combination of digital beamforming (DBF) processing, multichannel synthetic aperture radar (SAR) systems in azimuth show great promise for high-resolution and wide-swath imaging, whereas conventional processing methods do not take the nonuniformity of the scattering coefficient into consideration. This paper proposes a robust adaptive multichannel SAR processing method that first uses the Capon spatial spectrum estimator to obtain the spatial spectrum distribution over all ambiguous directions, and then reconstructs the interference-plus-noise covariance matrix from its definition to obtain the multichannel SAR processing filter. This novel method improves processing performance under a nonuniform scattering coefficient and is robust against array errors. Experiments with real measured data demonstrate the effectiveness and robustness of the proposed method.
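The covariance-reconstruction idea can be sketched for a generic uniform linear array; this is a simplified narrowband beamforming analogue of the SAR method, with invented array geometry, angles and powers:

```python
import numpy as np

# Capon-spectrum covariance reconstruction: rebuild the interference-plus-
# noise covariance by integrating P(theta) a(theta) a(theta)^H over all
# directions except the desired-signal sector, then form an MVDR filter.
M = 8                                    # half-wavelength uniform linear array
def steer(theta):
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta)) / np.sqrt(M)

rng = np.random.default_rng(3)
theta_sig, theta_int = 0.0, 0.5          # look direction and interferer (rad)
snap = (steer(theta_sig)[:, None] * rng.normal(size=100)
        + 3 * steer(theta_int)[:, None] * rng.normal(size=100)
        + 0.1 * (rng.normal(size=(M, 100)) + 1j * rng.normal(size=(M, 100))))
R = snap @ snap.conj().T / 100           # sample covariance

Rinv = np.linalg.inv(R)
R_ipn = np.zeros((M, M), complex)
for th in np.linspace(-np.pi / 2, np.pi / 2, 361):
    if abs(th - theta_sig) < 0.1:        # exclude the desired-signal sector
        continue
    a = steer(th)
    # Capon spectrum P(theta) = 1 / (a^H R^-1 a)
    R_ipn += np.outer(a, a.conj()) / np.real(a.conj() @ Rinv @ a)

w = np.linalg.solve(R_ipn, steer(theta_sig))
w /= steer(theta_sig).conj() @ w         # distortionless response at look angle
```

The resulting filter keeps unit gain toward the look direction while strongly attenuating the reconstructed interference direction.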
Some remarks on estimating a covariance structure model from a sample correlation matrix
Maydeu Olivares, Alberto; Hernández Estrada, Adolfo
2000-01-01
A popular model in structural equation modeling involves a multivariate normal density with a structured covariance matrix that has been categorized according to a set of thresholds. In this setup one may estimate the covariance structure parameters from the sample tetrachoric/polychoric correlations, but only if the covariance structure is scale invariant. Doing so when the covariance structure is not scale invariant results in estimating a more restricted covariance structure than the one i...
Islamiyati, A.; Fatmawati; Chamidah, N.
2018-03-01
In longitudinal data with bi-response, correlation occurs between measurements on the subjects of observation and between the responses. This causes auto-correlation of the errors, which can be handled by using a covariance matrix. In this article, we estimate the covariance matrix based on the penalized spline regression model. The penalized spline involves knot points and smoothing parameters simultaneously in controlling the smoothness of the curve. Based on our simulation study, the estimated regression model of the weighted penalized spline with the covariance matrix gives a smaller error value than the model without the covariance matrix.
Spatio-Temporal Audio Enhancement Based on IAA Noise Covariance Matrix Estimates
DEFF Research Database (Denmark)
Nørholm, Sidsel Marie; Jensen, Jesper Rindom; Christensen, Mads Græsbøll
2014-01-01
A method for estimating the noise covariance matrix in a multichannel setup is proposed. The method is based on the iterative adaptive approach (IAA), which only needs short segments of data to estimate the covariance matrix. Therefore, the method can be used for fast varying signals....... The method is based on an assumption of the desired signal being harmonic, which is used for estimating the noise covariance matrix from the covariance matrix of the observed signal. The noise covariance estimate is used in the linearly constrained minimum variance (LCMV) filter and compared...
Estimation of covariance matrix on the experimental data for nuclear data evaluation
International Nuclear Information System (INIS)
Murata, T.
1985-01-01
In order to evaluate fission and capture cross sections of some U and Pu isotopes for JENDL-3, we have a plan to evaluate them simultaneously with a least-squares method. For the simultaneous evaluation, the covariance matrix is required for each experimental data set. In the present work, we have studied procedures for deriving the covariance matrix from the error data given in the experimental papers. The covariance matrices were obtained using the partial errors and estimated correlation coefficients between partial errors of the same type at different neutron energies. Some examples of the covariance matrix estimation are explained and preliminary results of the simultaneous evaluation are presented. (author)
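Building a covariance matrix from partial errors as described above can be sketched with one uncorrelated (statistical) and one fully correlated (normalization) component; the cross-section values and error fractions below are illustrative:

```python
import numpy as np

# Covariance from partial errors: statistical components are uncorrelated
# (diagonal), a normalization component is fully correlated across energies
# (rank-one outer product). Numbers are invented for illustration.
sigma = np.array([2.10, 1.95, 1.80, 1.60])   # cross sections (barn)
stat = 0.02 * sigma                           # 2% uncorrelated statistical error
norm = 0.03 * sigma                           # 3% fully correlated normalization

V = np.diag(stat**2) + np.outer(norm, norm)

# correlation matrix for inspection
d = np.sqrt(np.diag(V))
corr = V / np.outer(d, d)
```

Additional partial-error components with intermediate correlation would add further terms of the form c_k * outer(s_k, s_k).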
ARMA Cholesky Factor Models for the Covariance Matrix of Linear Models.
Lee, Keunbaik; Baek, Changryong; Daniels, Michael J
2017-11-01
In longitudinal studies, serial dependence of repeated outcomes must be taken into account to make correct inferences on covariate effects. As such, care must be taken in modeling the covariance matrix. However, estimation of the covariance matrix is challenging because there are many parameters in the matrix and the estimated covariance matrix should be positive definite. To overcome these limitations, two Cholesky decomposition approaches have been proposed: the modified Cholesky decomposition for autoregressive (AR) structure and the moving average Cholesky decomposition for moving average (MA) structure. However, the correlations of repeated outcomes are often not captured parsimoniously by either approach separately. In this paper, we propose a class of flexible, nonstationary, heteroscedastic models, denoted ARMACD, that exploits the structure allowed by combining AR and MA modeling of the covariance matrix. We analyze a recent lung cancer study to illustrate the power of our proposed methods.
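The modified Cholesky decomposition mentioned above has a short numerical illustration: T Σ Tᵀ = D with T unit lower triangular (its sub-diagonal entries are, up to sign, generalized autoregressive coefficients) and D a diagonal matrix of innovation variances. The AR(1) covariance below is a toy example, not data from the paper:

```python
import numpy as np

# Modified Cholesky decomposition T @ Sigma @ T.T = D. Any unit lower
# triangular T and positive diagonal D reconstruct a positive-definite
# Sigma, which is why this parameterization is convenient for modeling.
rho, m = 0.6, 5
Sigma = rho ** np.abs(np.subtract.outer(np.arange(m), np.arange(m)))  # AR(1)

L = np.linalg.cholesky(Sigma)
Dhalf = np.diag(np.diag(L))
T = Dhalf @ np.linalg.inv(L)        # unit lower triangular
D = Dhalf ** 2                      # innovation variances
```

Modeling T and D separately sidesteps the positive-definiteness constraint entirely, since any valid pair maps back to a valid covariance matrix.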
On the regularity of the covariance matrix of a discretized scalar field on the sphere
Energy Technology Data Exchange (ETDEWEB)
Bilbao-Ahedo, J.D. [Departamento de Física Moderna, Universidad de Cantabria, Av. los Castros s/n, 39005 Santander (Spain); Barreiro, R.B.; Herranz, D.; Vielva, P.; Martínez-González, E., E-mail: bilbao@ifca.unican.es, E-mail: barreiro@ifca.unican.es, E-mail: herranz@ifca.unican.es, E-mail: vielva@ifca.unican.es, E-mail: martinez@ifca.unican.es [Instituto de Física de Cantabria (CSIC-UC), Av. los Castros s/n, 39005 Santander (Spain)
2017-02-01
We present a comprehensive study of the regularity of the covariance matrix of a discretized field on the sphere. In a particular situation, the rank of the matrix depends on the number of pixels, the number of spherical harmonics, the symmetries of the pixelization scheme and the presence of a mask. Taking into account the above mentioned components, we provide analytical expressions that constrain the rank of the matrix. They are obtained by expanding the determinant of the covariance matrix as a sum of determinants of matrices made up of spherical harmonics. We investigate these constraints for five different pixelizations that have been used in the context of Cosmic Microwave Background (CMB) data analysis: Cube, Icosahedron, Igloo, GLESP and HEALPix, finding that, at least in the considered cases, the HEALPix pixelization tends to provide a covariance matrix with a rank closer to the maximum expected theoretical value than the other pixelizations. The effect of the propagation of numerical errors in the regularity of the covariance matrix is also studied for different computational precisions, as well as the effect of adding a certain level of noise in order to regularize the matrix. In addition, we investigate the application of the previous results to a particular example that requires the inversion of the covariance matrix: the estimation of the CMB temperature power spectrum through the Quadratic Maximum Likelihood algorithm. Finally, some general considerations in order to achieve a regular covariance matrix are also presented.
A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix
Hu, Zongliang
2017-09-27
The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.
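In practice the (log-)determinant is computed from a Cholesky factor rather than `det()` directly, and for p close to n some regularization is needed first; a minimal sketch (the shrinkage weight and simulated data are illustrative, not one of the eight compared methods):

```python
import numpy as np

# Log-determinant of a high-dimensional covariance estimate. The raw
# sample covariance is ill-conditioned when p ~ n, so a simple linear
# shrinkage toward a scaled identity is applied before factorizing.
rng = np.random.default_rng(4)
n, p = 30, 25
X = rng.normal(size=(n, p))
S = np.cov(X, rowvar=False)                  # nearly singular when p ~ n

alpha = 0.2                                  # illustrative shrinkage weight
target = np.trace(S) / p * np.eye(p)
S_shrunk = (1 - alpha) * S + alpha * target

L = np.linalg.cholesky(S_shrunk)
logdet = 2 * np.sum(np.log(np.diag(L)))      # log|S_shrunk|, overflow-safe
```

Summing log-diagonal Cholesky entries avoids the overflow/underflow that plagues the raw determinant in high dimensions.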
Undesirable effects of covariance matrix techniques for error analysis
International Nuclear Information System (INIS)
Seibert, D.
1994-01-01
Regression with χ² constructed from covariance matrices should not be used for some combinations of covariance matrices and fitting functions. Using the technique for unsuitable combinations can amplify systematic errors. This amplification is uncontrolled, and can produce arbitrarily inaccurate results that might not be ruled out by a χ² test. In addition, this technique can give incorrect (artificially small) errors for fit parameters. I give a test for this instability and a more robust (but computationally more intensive) method for fitting correlated data.
DEFF Research Database (Denmark)
Yang, Yukay
I consider multivariate (vector) time series models in which the error covariance matrix may be time-varying. I derive a test of constancy of the error covariance matrix against the alternative that the covariance matrix changes over time. I design a new family of Lagrange-multiplier tests against...... to consider multivariate volatility modelling....
ℋ-matrix techniques for approximating large covariance matrices and estimating its parameters
Litvinenko, Alexander; Genton, Marc G.; Sun, Ying; Keyes, David E.
2016-01-01
In this work the task is to use the available measurements to estimate unknown hyper-parameters (variance, smoothness parameter and covariance length) of the covariance function. We do it by maximizing the joint log-likelihood function. This is a non-convex and non-linear problem. To overcome cubic complexity in linear algebra, we approximate the discretised covariance function in the hierarchical (ℋ-) matrix format. The ℋ-matrix format has a log-linear computational cost and storage O(kn log n), where the rank k is a small integer. On each iteration step of the optimization procedure the covariance matrix itself, its determinant and its Cholesky decomposition are recomputed within the ℋ-matrix format. (© 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)
On the mean squared error of the ridge estimator of the covariance and precision matrix
van Wieringen, Wessel N.
2017-01-01
For a suitably chosen ridge penalty parameter, the ridge regression estimator uniformly dominates the maximum likelihood regression estimator in terms of the mean squared error. Analogous results for the ridge maximum likelihood estimators of covariance and precision matrix are presented.
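The dominance phenomenon is easy to demonstrate numerically with a generic shrinkage estimator; note the convex-combination form below is a simple stand-in for illustration, not van Wieringen's exact ridge penalty:

```python
import numpy as np

# Compare squared Frobenius loss of the ML covariance estimator and a
# simple ridge-type shrinkage toward a scaled identity, averaged over
# repetitions. True covariance is the identity; numbers are illustrative.
rng = np.random.default_rng(5)
p, n, reps, w = 10, 15, 200, 0.3
Sigma = np.eye(p)

mse_mle, mse_ridge = 0.0, 0.0
for _ in range(reps):
    X = rng.normal(size=(n, p))            # true covariance is the identity
    S = X.T @ X / n                        # ML estimator (known zero mean)
    S_ridge = (1 - w) * S + w * (np.trace(S) / p) * np.eye(p)
    mse_mle += np.sum((S - Sigma) ** 2) / reps
    mse_ridge += np.sum((S_ridge - Sigma) ** 2) / reps
```

With p comparable to n, the shrinkage estimator trades a small bias for a large variance reduction and achieves lower mean squared error.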
Construction of covariance matrix for absolute fission yield data measurement
International Nuclear Information System (INIS)
Liu Tingjin; Sun Zhengjun
1999-01-01
The purpose is to provide a tool for experimenters and evaluators to conveniently construct the covariance matrix based on the information from the experiment. The method used is the so-called parameter analysis method. The basic method and formulae are given in the first section, a practical program is introduced in the second section, and finally some examples are given in the third section.
Construction and use of gene expression covariation matrix
Directory of Open Access Journals (Sweden)
Bellis Michel
2009-07-01
Background: One essential step in the massive analysis of transcriptomic profiles is the calculation of the correlation coefficient, a value used to select pairs of genes with similar or inverse transcriptional profiles across a large fraction of the biological conditions examined. Until now, the choice between the two available methods for calculating the coefficient has been dictated mainly by technological considerations. Specifically, in analyses based on double-channel techniques, researchers have been required to use covariation correlation, i.e. the correlation between gene expression changes measured between several pairs of biological conditions, expressed for example as fold-change. In contrast, in analyses of single-channel techniques scientists have been restricted to the use of coexpression correlation, i.e. correlation between gene expression levels. To our knowledge, nobody has ever examined the possible benefits of using covariation instead of coexpression in massive analyses of single-channel microarray results. Results: We describe here how single-channel techniques can be treated like double-channel techniques and used to generate both gene expression changes and covariation measures. We also present a new method that allows the calculation of both positive and negative correlation coefficients between genes. First, we perform systematic comparisons between two given biological conditions and classify, for each comparison, genes as increased (I), decreased (D), or not changed (N). As a result, the original series of n gene expression level measures assigned to each gene is replaced by an ordered string of n(n-1)/2 symbols, e.g. IDDNNIDID....DNNNNNNID, with the length of the string corresponding to the number of comparisons. In a second step, positive and negative covariation matrices (CVM) are constructed by calculating statistically significant positive or negative correlation scores for any pair of genes by comparing their
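The I/D/N encoding can be sketched directly; the threshold, toy expression values and the simple agreement score below are invented to illustrate the idea (the paper uses statistically significant correlation scores instead):

```python
import numpy as np
from itertools import combinations

# Encode each gene's n(n-1)/2 pairwise condition comparisons as a string of
# I (increased), D (decreased), N (not changed) symbols, then score
# covariation between two genes from agreement of their informative symbols.
def idn_string(levels, thresh=0.5):
    out = []
    for i, j in combinations(range(len(levels)), 2):
        diff = levels[j] - levels[i]
        out.append('I' if diff > thresh else 'D' if diff < -thresh else 'N')
    return ''.join(out)

def covariation(s, t):
    # fraction of agreeing minus disagreeing calls over pairs where
    # both genes show a change (no N on either side)
    pairs = [(a, b) for a, b in zip(s, t) if 'N' not in (a, b)]
    if not pairs:
        return 0.0
    agree = sum(a == b for a, b in pairs)
    return (2 * agree - len(pairs)) / len(pairs)

gene_a = [1.0, 2.0, 3.0, 1.2]
gene_b = [2.0, 3.1, 4.0, 2.1]   # co-varies with gene_a
gene_c = [3.0, 2.0, 1.0, 2.9]   # anti-covaries with gene_a
```

Genes changing in the same direction across comparisons score +1, genes changing in opposite directions score -1, which supports both positive and negative covariation matrices.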
Evaluating dynamic covariance matrix forecasting and portfolio optimization
Sendstad, Lars Hegnes; Holten, Dag Martin
2012-01-01
In this thesis we have evaluated the covariance forecasting ability of the simple moving average, the exponential moving average and the dynamic conditional correlation models. Overall we found that a dynamic portfolio can gain significant improvements by implementing a multivariate GARCH forecast. We further divided the global investment universe into sectors and regions in order to investigate the relative portfolio performance of several asset allocation strategies with both variance and c...
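The exponential moving average forecast mentioned above has a compact recursive form; the decay value and simulated returns below are illustrative (0.94 is the commonly cited RiskMetrics daily decay, not necessarily the thesis's choice):

```python
import numpy as np

# EWMA covariance forecast: S_t = lam * S_{t-1} + (1 - lam) * r_t r_t^T.
# Returns are simulated from a fixed covariance; numbers are illustrative.
rng = np.random.default_rng(6)
T, p, lam = 500, 3, 0.94
true_cov = np.array([[1.0, 0.3, 0.0],
                     [0.3, 1.0, 0.2],
                     [0.0, 0.2, 1.0]]) * 1e-4
returns = rng.multivariate_normal(np.zeros(p), true_cov, size=T)

S = np.cov(returns[:50], rowvar=False)       # initialize on a burn-in window
for r in returns[50:]:
    S = lam * S + (1 - lam) * np.outer(r, r)
```

Because each update is a convex combination of positive semi-definite terms with a positive-definite start, the forecast stays a valid covariance matrix at every step.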
Exploiting Data Sparsity In Covariance Matrix Computations on Heterogeneous Systems
Charara, Ali
2018-01-01
the thesis leverages recursive formulations for dense Cholesky-based matrix algorithms, but it also implements a novel TLR-Cholesky factorization using batched linear algebra operations to increase hardware occupancy and reduce the overhead of the API
The count variance-covariance matrix in a critical reactor
International Nuclear Information System (INIS)
Carloni, F.; Giovannini, R.
1984-01-01
The present paper deals with a critical reactor containing a set of neutron detectors operating one at a time in different time intervals. The analysis makes use of the Kolmogorov backward formalism for branching processes, in the framework of the one-velocity, point reactor model, explicitly taking into account the six groups of delayed neutrons. Expressions for the mean value and the covariance of the counting distribution are reported. The listing of the FORTRAN IV subroutine CRITIC which computes these moments is also reported.
Asymptotic theory for the sample covariance matrix of a heavy-tailed multivariate time series
DEFF Research Database (Denmark)
Davis, Richard A.; Mikosch, Thomas Valentin; Pfaffel, Olivier
2016-01-01
In this paper we give an asymptotic theory for the eigenvalues of the sample covariance matrix of a multivariate time series. The time series constitutes a linear process across time and between components. The input noise of the linear process has regularly varying tails with index α∈(0,4); in particular, the time series has infinite fourth moment. We derive the limiting behavior for the largest eigenvalues of the sample covariance matrix and show point process convergence of the normalized eigenvalues. The limiting process has an explicit form involving points of a Poisson process and eigenvalues...... of a non-negative definite matrix. Based on this convergence we derive limit theory for a host of other continuous functionals of the eigenvalues, including the joint convergence of the largest eigenvalues, the joint convergence of the largest eigenvalue and the trace of the sample covariance matrix...
Ex-post evaluation of tax legislation in the Netherlands
S.J.C. Hemels (Sigrid)
2011-01-01
Introduction: Since the end of the 20th century, ex-post evaluation of tax legislation has consistently been part of the agenda of the Dutch government. In 2005, the 2001 Income Tax Act was evaluated. In addition, several tax expenditures are evaluated each year. Tax expenditures can be a
R-matrix and q-covariant oscillators for Uq(sl(n|m))
International Nuclear Information System (INIS)
Leblanc, Y.; Wallet, J.C.
1993-02-01
An R-matrix formalism is used to construct covariant quantum oscillator algebras for U_q(sl(n|m)). It is shown that the complete structure of the twisted oscillator algebras can be obtained from the properties of the intertwining matrix obeying a Hecke-type relation, combined with covariance of the oscillators at the deformed level and associativity. The resulting twisted algebras, involving q-bosons and q-fermions, are invariant under natural q-transformations of the oscillators induced by the coproduct. (author) 11 refs
Approaches for the generation of a covariance matrix for the Cf-252 fission-neutron spectrum
International Nuclear Information System (INIS)
Mannhart, W.
1983-01-01
After a brief retrospective glance at the situation, an evaluation of the Cf-252 neutron spectrum with a complete covariance matrix based on the results of integral experiments is proposed. The different steps already taken in such an evaluation and work in progress are reviewed. It is shown that special attention should be given to the normalization of the neutron spectrum, which must be reflected in the covariance matrix. The result of the least-squares adjustment procedure applied can easily be combined with the results of direct spectrum measurements and should be regarded as the first step in a new evaluation of the Cf-252 fission-neutron spectrum. (author)
On the use of the covariance matrix to fit correlated data
D'Agostini, G.
1994-07-01
Best fits to data which are affected by systematic uncertainties on the normalization factor have a tendency to produce curves lower than expected if the covariance matrix of the data points is used in the definition of the χ². This paper shows that the effect is a direct consequence of the hypothesis used to estimate the empirical covariance matrix, namely the linearization on which the usual error propagation relies. The bias can become unacceptable if the normalization error is large, or if a large number of data points are fitted.
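The downward bias is easy to reproduce with the classic two-point toy example (Peelle's pertinent puzzle); the numbers below are the standard textbook illustration, not taken from the paper:

```python
import numpy as np

# Two measurements of the same quantity with independent 10% statistical
# errors and a common 20% normalization error. The chi^2-minimizing
# average with the full covariance matrix falls BELOW both data points.
y = np.array([1.5, 1.0])
stat = 0.10 * y
V = np.diag(stat**2) + 0.20**2 * np.outer(y, y)   # + correlated normalization

one = np.ones(2)
Vinv = np.linalg.inv(V)
mu = (one @ Vinv @ y) / (one @ Vinv @ one)        # chi^2-minimizing average
```

The estimate lands near 0.88, below the smaller of the two measurements, which is exactly the counter-intuitive effect the paper traces back to the linearized covariance construction.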
Shrinkage covariance matrix approach based on robust trimmed mean in gene sets detection
Karjanto, Suryaefiza; Ramli, Norazan Mohamed; Ghani, Nor Azura Md; Aripin, Rasimah; Yusop, Noorezatty Mohd
2015-02-01
Microarray technology involves placing an orderly arrangement of thousands of gene sequences in a grid on a suitable surface. The technology has enabled novel discoveries since its development and has attracted increasing attention among researchers. The widespread adoption of microarray technology is largely due to its ability to perform simultaneous analysis of thousands of genes in a massively parallel manner in one experiment. Hence, it provides valuable knowledge on gene interaction and function. A microarray data set typically consists of tens of thousands of genes (variables) from just dozens of samples due to various constraints. Therefore, the sample covariance matrix in Hotelling's T² statistic is not positive definite and becomes singular, so it cannot be inverted. In this research, the Hotelling's T² statistic is combined with a shrinkage approach as an alternative estimation of the covariance matrix to detect significant gene sets. The use of the shrinkage covariance matrix overcomes the singularity problem by converting an unbiased estimator into an improved biased estimator of the covariance matrix. A robust trimmed mean is integrated into the shrinkage matrix to reduce the influence of outliers and consequently increase its efficiency. The performance of the proposed method is measured using several simulation designs. The results are expected to outperform existing techniques in many tested conditions.
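The singularity problem and its shrinkage fix can be shown in a few lines; the diagonal-target shrinkage below is a generic stand-in (the paper integrates a robust trimmed mean, which is not reproduced here), and the data are simulated:

```python
import numpy as np

# With p variables >> n samples the sample covariance S has rank <= n-1
# and cannot be inverted for Hotelling's T^2; shrinking toward a diagonal
# target restores positive definiteness. Shrinkage weight is illustrative.
rng = np.random.default_rng(7)
n, p = 10, 50                                # "genes >> samples"
X = rng.normal(size=(n, p))
xbar = X.mean(axis=0)
S = np.cov(X, rowvar=False)                  # singular: rank <= n-1 < p

w = 0.5
S_shrunk = (1 - w) * S + w * np.diag(np.diag(S))

T2 = n * xbar @ np.linalg.solve(S_shrunk, xbar)   # now computable
```

The convex combination of a positive semi-definite matrix with a positive diagonal target is positive definite, so the quadratic form is well defined.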
A note on the eigensystem of the covariance matrix of dichotomous Guttman items
Directory of Open Access Journals (Sweden)
Clintin P Davis-Stober
2015-12-01
We consider the sample covariance matrix for dichotomous Guttman items under a set of uniformity conditions, and obtain closed-form expressions for the eigenvalues and eigenvectors of the matrix. In particular, we describe the eigenvalues and eigenvectors of the matrix in terms of trigonometric functions of the number of items. Our results parallel those of Zwick (1987) for the correlation matrix under the same uniformity conditions. We provide an explanation for certain properties of principal components under Guttman scalability which have been first reported by Guttman (1950).
A Note on the Eigensystem of the Covariance Matrix of Dichotomous Guttman Items.
Davis-Stober, Clintin P; Doignon, Jean-Paul; Suck, Reinhard
2015-01-01
We consider the covariance matrix for dichotomous Guttman items under a set of uniformity conditions, and obtain closed-form expressions for the eigenvalues and eigenvectors of the matrix. In particular, we describe the eigenvalues and eigenvectors of the matrix in terms of trigonometric functions of the number of items. Our results parallel those of Zwick (1987) for the correlation matrix under the same uniformity conditions. We provide an explanation for certain properties of principal components under Guttman scalability which have been first reported by Guttman (1950).
Litvinenko, Alexander
2017-09-26
The main goal of this article is to introduce the parallel hierarchical matrix library HLIBpro to the statistical community. We describe the HLIBCov package, which is an extension of the HLIBpro library for approximating large covariance matrices and maximizing likelihood functions. We show that an approximate Cholesky factorization of a dense matrix of size $2M \times 2M$ can be computed on a modern multi-core desktop in a few minutes. Further, HLIBCov is used for estimating unknown parameters such as the covariance length, variance and smoothness parameter of a Matérn covariance function by maximizing the joint Gaussian log-likelihood function. The computational bottleneck here is the expensive linear algebra arithmetic due to large and dense covariance matrices. Therefore covariance matrices are approximated in the hierarchical ($\mathcal{H}$-) matrix format with computational cost $\mathcal{O}(k^2 n \log^2 n / p)$ and storage $\mathcal{O}(kn \log n)$, where the rank $k$ is a small integer (typically $k<25$), $p$ is the number of cores and $n$ the number of locations on a fairly general mesh. We demonstrate a synthetic example where the true parameter values are known. For reproducibility we provide the C++ code, the documentation, and the synthetic data.
Ex-post evaluations of demand forecast accuracy
DEFF Research Database (Denmark)
Nicolaisen, Morten Skou; Driscoll, Patrick Arthur
2014-01-01
Travel demand forecasts play a crucial role in the preparation of decision support for policy makers in the field of transport planning. The results feed directly into impact appraisals such as cost-benefit analyses and environmental impact assessments, which are mandatory for large public works...... projects in many countries. Over the last couple of decades there has been increasing attention to the lack of demand forecast accuracy, but since data availability for comprehensive ex-post appraisals is problematic, such studies are still relatively rare. The present paper presents a review...... of the largest ex-post studies of demand forecast accuracy for transport infrastructure projects. The focus is twofold: to provide an overview of observed levels of demand forecast inaccuracy and to explore the primary explanations offered for the observed inaccuracy. Inaccuracy in the form of both bias...
Kenya; Ex Post Assessment of Longer-Term Program Engagement
International Monetary Fund
2008-01-01
This paper discusses key findings of the Ex Post Assessment (EPA) of Longer-Term Program Engagement paper for Kenya. This EPA focuses on 1993–2007, when Kenya was engaged in four successive IMF arrangements. Macroeconomic policy design was broadly appropriate, and implementation was generally sound. Growth slowed in the 1990s, but picked up after the 2002 elections, reflecting buoyant global conditions, structural reforms, and a surge of private capital inflows. Monetary policies were complic...
Litvinenko, Alexander
2017-01-01
matrices. Therefore covariance matrices are approximated in the hierarchical ($\mathcal{H}$-) matrix format with computational cost $\mathcal{O}(k^2 n \log^2 n/p)$ and storage $\mathcal{O}(kn \log n)$, where the rank $k$ is a small integer (typically $k<25$), $p
International Nuclear Information System (INIS)
Sato, Masanori; Matsubara, Takahiko; Takada, Masahiro; Hamana, Takashi
2011-01-01
Using 1000 ray-tracing simulations for a Λ-dominated cold dark matter model in Sato et al., we study the covariance matrix of cosmic shear correlation functions, which are the standard statistics used in previous measurements. The shear correlation function at a particular separation angle is affected by Fourier modes over a wide range of multipoles, even beyond the survey area, which complicates the analysis of the covariance matrix. To overcome such obstacles we first construct Gaussian shear simulations from the 1000 realizations and then use the Gaussian simulations to disentangle the Gaussian contribution to the covariance matrix we measured from the original simulations. We found that an analytical formula for the Gaussian covariance overestimates the covariance amplitudes due to the effect of the finite survey area. Furthermore, the clean separation of the Gaussian covariance allows us to examine the non-Gaussian covariance contributions as a function of separation angles and source redshifts. For upcoming surveys with typical source redshifts of z_s = 0.6 and 1.0, the non-Gaussian contribution to the diagonal covariance components at 1 arcmin scales is greater than the Gaussian contribution by a factor of 20 and 10, respectively. Predictions based on the halo model reproduce the simulation results qualitatively well, but show a sizable disagreement in the covariance amplitudes. By combining these simulation results we develop a fitting formula for the covariance matrix for a survey with arbitrary area coverage, taking into account the effects of the finiteness of the survey area on the Gaussian covariance.
Directory of Open Access Journals (Sweden)
K. Karthikeyan
2012-10-01
Full Text Available This paper describes the application of an evolutionary algorithm, the Restart Covariance Matrix Adaptation Evolution Strategy (RCMA-ES), to the Generation Expansion Planning (GEP) problem. RCMA-ES is a class of continuous Evolutionary Algorithm (EA) derived from the concept of self-adaptation in evolution strategies, which adapts the covariance matrix of a multivariate normal search distribution. The original GEP problem is modified by incorporating the Virtual Mapping Procedure (VMP). The GEP problem for synthetic test systems with 6-year, 14-year and 24-year planning horizons and five types of candidate units is considered. Two different constraint-handling methods are incorporated and the impact of each method is compared. In addition, the results are compared with and validated against the dynamic programming method.
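The core of any CMA-type strategy is sampling candidates from a multivariate normal distribution and adapting its covariance matrix toward directions of successful steps. A stripped-down sketch of that idea (rank-μ update only — no evolution paths, restarts, or step-size control — and a toy quadratic standing in for a real GEP cost function; all names are illustrative):

```python
import numpy as np

def cma_step(f, m, sigma, C, lam=20, mu=5, lr=0.3, rng=None):
    """One simplified covariance-matrix-adaptation iteration."""
    rng = np.random.default_rng() if rng is None else rng
    A = np.linalg.cholesky(C)
    Y = rng.standard_normal((lam, len(m))) @ A.T      # lam samples ~ N(0, C)
    X = m + sigma * Y                                  # candidate solutions
    order = np.argsort([f(x) for x in X])              # rank by cost
    Ysel = Y[order[:mu]]                               # mu best steps
    m_new = X[order[:mu]].mean(axis=0)                 # recombine elite mean
    C_new = (1 - lr) * C + lr * (Ysel.T @ Ysel) / mu   # rank-mu covariance update
    return m_new, C_new

# toy run: minimize ||x||^2 in place of a GEP cost function
rng = np.random.default_rng(0)
m, C = np.array([3.0, -2.0]), np.eye(2)
for _ in range(80):
    m, C = cma_step(lambda x: float(x @ x), m, 0.5, C, rng=rng)
```

The full RCMA-ES adds cumulative step-size adaptation and restarts with growing population size, which this sketch omits.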
Yoneoka, Daisuke; Henmi, Masayuki
2017-06-01
Recently, the number of regression models has dramatically increased in several academic fields. However, within the context of meta-analysis, synthesis methods for such models have not been developed at a commensurate pace. One of the difficulties hindering this development is the disparity in the sets of covariates among models in the literature. If the sets of covariates differ across models, the interpretation of coefficients will differ, thereby making it difficult to synthesize them. Moreover, previous synthesis methods for regression models, such as multivariate meta-analysis, often have problems because the covariance matrix of coefficients (i.e. within-study correlations) or individual patient data are not necessarily available. This study, therefore, proposes a method to synthesize linear regression models under different covariate sets by using a generalized least squares method involving bias correction terms. In particular, we also propose an approach to recover (at most) three correlations of covariates, which are required for the calculation of the bias term without individual patient data. Copyright © 2016 John Wiley & Sons, Ltd.
International Nuclear Information System (INIS)
Mannhart, W.
1986-01-01
Based on the responses of 25 different neutron activation detectors, the neutron spectrum of Cf-252 has been adjusted with least-squares methods. For a fixed input neutron spectrum, the covariance matrix of this spectrum has been systematically varied to investigate the influence of this matrix on the final result. The investigation showed that the adjusted neutron spectrum is rather sensitive to the structure of the covariance matrix for the input spectrum. (author)
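The kind of least-squares adjustment described here can be sketched with the standard generalized-least-squares update, in which the prior-spectrum covariance V directly shapes the result — which is why its structure matters so much. A hedged illustration (variable names are ours, not the paper's):

```python
import numpy as np

def gls_adjust(x0, V, R, y, W):
    """Adjust a prior spectrum x0 (covariance V) to detector readings
    y = R x (covariance W) by generalized least squares."""
    S = R @ V @ R.T + W                      # covariance of predicted responses
    K = V @ R.T @ np.linalg.inv(S)           # gain matrix
    x = x0 + K @ (y - R @ x0)                # adjusted spectrum
    V_post = V - K @ R @ V                   # adjusted (reduced) covariance
    return x, V_post
```

Re-running `gls_adjust` with different off-diagonal structures in `V` (e.g. varying correlation lengths between spectrum bins) exposes exactly the sensitivity the study investigates.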
Cloud-Based DDoS HTTP Attack Detection Using Covariance Matrix Approach
Directory of Open Access Journals (Sweden)
Abdulaziz Aborujilah
2017-01-01
Full Text Available In this era of technology, cloud computing has become an essential part of the IT services used in daily life. In this regard, website hosting services are gradually moving to the cloud. This adds valued new features to cloud-based websites and at the same time introduces new threats for such services. The DDoS attack is one such serious threat. A covariance matrix approach is used in this article to detect such attacks. The results were encouraging, according to the confusion matrix and ROC descriptors.
Covariant field equations, gauge fields and conservation laws from Yang-Mills matrix models
International Nuclear Information System (INIS)
Steinacker, Harold
2009-01-01
The effective geometry and the gravitational coupling of nonabelian gauge and scalar fields on generic NC branes in Yang-Mills matrix models are determined. Covariant field equations are derived from the basic matrix equations of motion, known as the Yang-Mills algebra. Remarkably, the equations of motion for the Poisson structure and for the nonabelian gauge fields follow from a matrix Noether theorem, and are therefore protected from quantum corrections. This provides a transparent derivation and generalization of the effective action governing the SU(n) gauge fields obtained in [1], including the would-be topological term. In particular, the IKKT matrix model is capable of describing 4-dimensional NC space-times with a general effective metric. Metric deformations of flat Moyal-Weyl space are briefly discussed.
Holmes, John B; Dodds, Ken G; Lee, Michael A
2017-03-02
An important issue in genetic evaluation is the comparability of random effects (breeding values), particularly between pairs of animals in different contemporary groups. This is usually referred to as genetic connectedness. While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix. However, obtaining the prediction error variance-covariance matrix is computationally demanding for large-scale genetic evaluations. Many alternative statistics have been proposed that avoid the computational cost of obtaining the prediction error variance-covariance matrix, such as counts of genetic links between contemporary groups, gene flow matrices, and functions of the variance-covariance matrix of estimated contemporary group fixed effects. In this paper, we show that a correction to the variance-covariance matrix of estimated contemporary group fixed effects will produce the exact prediction error variance-covariance matrix averaged by contemporary group for univariate models in the presence of single or multiple fixed effects and one random effect. We demonstrate the correction for a series of models and show that approximations to the prediction error matrix based solely on the variance-covariance matrix of estimated contemporary group fixed effects are inappropriate in certain circumstances. Our method allows for the calculation of a connectedness measure based on the prediction error variance-covariance matrix by calculating only the variance-covariance matrix of estimated fixed effects. Since the number of fixed effects in genetic evaluation is usually orders of magnitude smaller than the number of random effect levels, the computational requirements for our method should be reduced.
Filipiak, Katarzyna; Klein, Daniel; Roy, Anuradha
2017-01-01
The problem of testing the separability of a covariance matrix against an unstructured variance-covariance matrix is studied in the context of multivariate repeated measures data using Rao's score test (RST). The RST statistic is developed with the first component of the separable structure as a first-order autoregressive (AR(1)) correlation matrix or an unstructured (UN) covariance matrix under the assumption of multivariate normality. It is shown that the distribution of the RST statistic under the null hypothesis of any separability does not depend on the true values of the mean or the unstructured components of the separable structure. A significant advantage of the RST is that it can be performed for small samples, even smaller than the dimension of the data, where the likelihood ratio test (LRT) cannot be used, and it outperforms the standard LRT in a number of contexts. Monte Carlo simulations are then used to study the comparative behavior of the null distribution of the RST statistic, as well as that of the LRT statistic, in terms of sample size considerations, and for the estimation of the empirical percentiles. Our findings are compared with existing results where the first component of the separable structure is a compound symmetry (CS) correlation matrix. It is also shown by simulations that the empirical null distribution of the RST statistic converges faster than the empirical null distribution of the LRT statistic to the limiting χ² distribution. The tests are implemented on a real dataset from medical studies. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A Concise Method for Storing and Communicating the Data Covariance Matrix
Energy Technology Data Exchange (ETDEWEB)
Larson, Nancy M [ORNL
2008-10-01
The covariance matrix associated with experimental cross section or transmission data consists of several components. Statistical uncertainties on the measured quantity (counts) provide a diagonal contribution. Off-diagonal components arise from uncertainties on the parameters (such as normalization or background) that figure into the data reduction process; these are denoted systematic or common uncertainties, since they affect all data points. The full off-diagonal data covariance matrix (DCM) can be extremely large, since the size is the square of the number of data points. Fortunately, it is not necessary to explicitly calculate, store, or invert the DCM. Likewise, it is not necessary to explicitly calculate, store, or use the inverse of the DCM. Instead, it is more efficient to accomplish the same results using only the various component matrices that appear in the definition of the DCM. Those component matrices are either diagonal or small (the number of data points times the number of data-reduction parameters); hence, this implicit data covariance method requires far less array storage and far fewer computations while producing more accurate results.
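The implicit scheme described above can be illustrated with the Woodbury identity: if the DCM has the form D + G M Gᵀ, with D the diagonal statistical part, G the (data points × parameters) sensitivity matrix and M the data-reduction parameter covariance, then products with the DCM inverse need only the small component matrices. A sketch under those assumptions (not the actual code described in the report):

```python
import numpy as np

def apply_dcm_inverse(d, G, M, v):
    """Compute (D + G M G^T)^{-1} v via the Woodbury identity,
    never forming the full (n x n) data covariance matrix."""
    Dinv_v = v / d                                    # D^{-1} v
    Dinv_G = G / d[:, None]                           # D^{-1} G
    small = np.linalg.inv(M) + G.T @ Dinv_G           # p x p, p = #parameters
    return Dinv_v - Dinv_G @ np.linalg.solve(small, G.T @ Dinv_v)
```

Only diagonal and n × p arrays are stored, so the cost is linear in the number of data points rather than quadratic, which is the storage and computation saving the abstract describes.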
Covariance Matrix of a Double-Differential Doppler-Broadened Elastic Scattering Cross Section
Arbanas, G.; Becker, B.; Dagan, R.; Dunn, M. E.; Larson, N. M.; Leal, L. C.; Williams, M. L.
2012-05-01
Legendre moments of a double-differential Doppler-broadened elastic neutron scattering cross section on 238U are computed near the 6.67 eV resonance at temperature T = 103 K up to angular order 14. A covariance matrix of these Legendre moments is computed as a functional of the covariance matrix of the elastic scattering cross section. A variance of double-differential Doppler-broadened elastic scattering cross section is computed from the covariance of Legendre moments. Notice: This manuscript has been authored by UT-Battelle, LLC, under contract DE-AC05-00OR22725 with the U.S. Department of Energy. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
Enforcing Margin Squeeze Ex Post Across Converging Telecommunications Markets
DEFF Research Database (Denmark)
Bergqvist, Christian; Townsend, John
, delay profitability or limit their ability to remain or expand on markets. However, traditional market definitions are being challenged by (1) the technological convergence of services and (2) innovative product offerings taking advantage of this convergence. Consumers now routinely purchase a bundle...... and innovation present both theoretical and practical difficulties for assessing “muddled margins” on telecoms markets. New and different enforcement approaches to exclusion will have to be formulated within the Article 102 framework and tested in the Courts. This may even require abstaining from applying...... Article 102 TFEU during material periods of convergence, and confining ex post enforcement activity to sector regulation, even when this is inferior for safeguarding effective competition....
Directory of Open Access Journals (Sweden)
Changgan SHU
2014-09-01
Full Text Available In the standard root multiple signal classification algorithm, the performance of direction-of-arrival estimation degrades, and can even fail, under conditions of low signal-to-noise ratio and small signal separation. By reconstructing and weighting the covariance matrix of the received signal, the modified algorithm can provide more accurate estimation results. Computer simulation and performance analysis are given next, which show that under conditions of lower signal-to-noise ratio and stronger correlation between signals, the proposed modified algorithm provides better azimuth estimation performance than the standard method.
Robust Markowitz mean-variance portfolio selection under ambiguous covariance matrix *
Ismail, Amine; Pham, Huyên
2016-01-01
This paper studies a robust continuous-time Markowitz portfolio selection problem where the model uncertainty bears on the covariance matrix of multiple risky assets. This problem is formulated as a min-max mean-variance problem over a set of non-dominated probability measures that is solved by a McKean-Vlasov dynamic programming approach, which allows us to characterize the solution in terms of a Bellman-Isaacs equation in the Wasserstein space of probability measures. We provide expli...
Automated vessel segmentation using cross-correlation and pooled covariance matrix analysis.
Du, Jiang; Karimi, Afshin; Wu, Yijing; Korosec, Frank R; Grist, Thomas M; Mistretta, Charles A
2011-04-01
Time-resolved contrast-enhanced magnetic resonance angiography (CE-MRA) provides contrast dynamics in the vasculature and allows vessel segmentation based on temporal correlation analysis. Here we present an automated vessel segmentation algorithm including automated generation of regions of interest (ROIs), cross-correlation and pooled sample covariance matrix analysis. The dynamic images are divided into multiple equal-sized regions. In each region, ROIs for artery, vein and background are generated using an iterative thresholding algorithm based on the contrast arrival time map and contrast enhancement map. Region-specific multi-feature cross-correlation analysis and pooled covariance matrix analysis are performed to calculate the Mahalanobis distances (MDs), which are used to automatically separate arteries from veins. This segmentation algorithm is applied to a dual-phase dynamic imaging acquisition scheme where low-resolution time-resolved images are acquired during the dynamic phase followed by high-frequency data acquisition at the steady-state phase. The segmented low-resolution arterial and venous images are then combined with the high-frequency data in k-space and inverse Fourier transformed to form the final segmented arterial and venous images. Results from volunteer and patient studies demonstrate the advantages of this automated vessel segmentation and dual phase data acquisition technique. Copyright © 2011 Elsevier Inc. All rights reserved.
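The pooled-covariance Mahalanobis-distance step at the heart of the segmentation can be sketched generically: pool the within-class covariances of the ROI feature vectors, then assign each voxel to the nearest class mean in Mahalanobis distance (names are illustrative, not from the paper's code):

```python
import numpy as np

def pooled_covariance(groups):
    """Pooled sample covariance of several labelled feature groups."""
    n = sum(len(g) for g in groups)
    S = sum((len(g) - 1) * np.cov(g, rowvar=False) for g in groups)
    return S / (n - len(groups))

def nearest_class_md(x, means, S):
    """Index of the class mean closest to x in Mahalanobis distance."""
    Sinv = np.linalg.inv(S)
    d2 = [(x - m) @ Sinv @ (x - m) for m in means]
    return int(np.argmin(d2))
```

In the paper's setting the groups would be the artery, vein and background ROIs, and the features the per-voxel temporal correlation measures.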
Chen, Peng; Yang, Yixin; Wang, Yong; Ma, Yuanliang
2018-05-08
When sensor position errors exist, the performance of recently proposed interference-plus-noise covariance matrix (INCM)-based adaptive beamformers may be severely degraded. In this paper, we propose a weighted subspace fitting-based INCM reconstruction algorithm to overcome sensor displacement for linear arrays. By estimating the rough signal directions, we construct a novel possible mismatched steering vector (SV) set. We analyze the proximity of the signal subspace from the sample covariance matrix (SCM) and the space spanned by the possible mismatched SV set. After solving an iterative optimization problem, we reconstruct the INCM using the estimated sensor position errors. Then we estimate the SV of the desired signal by solving an optimization problem with the reconstructed INCM. The main advantage of the proposed algorithm is its robustness against SV mismatches dominated by unknown sensor position errors. Numerical examples show that even if the position errors are up to half of the assumed sensor spacing, the output signal-to-interference-plus-noise ratio is only reduced by 4 dB. Beam patterns plotted using experiment data show that the interference suppression capability of the proposed beamformer outperforms other tested beamformers.
Directory of Open Access Journals (Sweden)
Clarence C. Y. Kwan
2010-07-01
Full Text Available This study considers, from a pedagogic perspective, a crucial requirement for the covariance matrix of security returns in mean-variance portfolio analysis. Although the requirement that the covariance matrix be positive definite is fundamental in modern finance, it has not received any attention in standard investment textbooks. Being unaware of the requirement could cause confusion for students over some strange portfolio results that are based on seemingly reasonable input parameters. This study considers the requirement both informally and analytically. Electronic spreadsheet tools for constrained optimization and basic matrix operations are utilized to illustrate the various concepts involved.
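The positive-definiteness requirement discussed above is easy to check programmatically as well as in a spreadsheet; a small sketch, including a correlation matrix whose entries each look reasonable but which is not a valid covariance matrix:

```python
import numpy as np

def is_valid_covariance(C, tol=1e-10):
    """A covariance matrix must be symmetric and positive semidefinite."""
    C = np.asarray(C, dtype=float)
    return bool(np.allclose(C, C.T) and np.min(np.linalg.eigvalsh(C)) > -tol)

# Pairwise correlations of 0.9, 0.9 and -0.9 cannot coexist:
bad = np.array([[ 1.0, 0.9, -0.9],
                [ 0.9, 1.0,  0.9],
                [-0.9, 0.9,  1.0]])
```

Feeding a matrix like `bad` into a mean-variance optimizer is precisely what produces the "strange portfolio results" from seemingly reasonable inputs that the study warns students about.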
Forum shopping for ex-post gas-balancing services
International Nuclear Information System (INIS)
Keyaerts, Nico; D'haeseleer, William
2014-01-01
The patchwork of different imbalance-settlement rules in geographically adjacent gas regions induces shippers to go “forum shopping” to minimize costs of ex-post balancing services. This shopping increases efficiency, and thus welfare of the shippers, on the one hand. The impact on net efficiency is dependent on the relative incentives provided by different balancing mechanisms and the relative system-balancing costs that the transmission-system operators face to offer balancing services to unbalanced shippers, on the other hand. If the gas-balancing mechanism and the system-balancing costs are aligned, net efficiency in the combined gas system will rise. Our results demonstrate that such an outcome is not guaranteed. Hence, market integration without properly checking compatibility of balancing rules can improve shipper efficiency at the cost of reducing overall efficiency. The latter outcome should clearly be avoided by policy makers and European regulators whose primary concern should be overall efficiency as this provides fair and efficient prices for gas consumers and a higher utility for society. - Highlights: • Transnational gas-shipper activity increases shipper profit. • Balancing rules in multi-region gas markets are not always compatible. • Regional TSOs balance less efficiently if wrong incentives are provided. • Net efficiency is dependent on cost reflection of relative imbalance tariffs
Ex Post Regulation as the Method of Sectoral Regulation in Electricity Sector
Directory of Open Access Journals (Sweden)
Rafał Nagaj
2017-10-01
Full Text Available Aim/purpose - The aim of the article is to present the essence of the ex post approach to sectoral regulation, to show the advantages and disadvantages of ex post regulation, and to answer the question of whether it is worth using in the electricity sector. Design/methodology/approach - For this purpose, a critical analysis of expert literature was made, together with an empirical analysis of countries that have applied ex post regulation in the electricity sector in the European Union. Two research methods were used: a case study and a comparison of changes in price and quality of services. The research covered the period 2000-2016. Findings - It was found that ex post regulation reduces regulatory costs, does not adversely affect the quality of service and long-term rates, and gives businesses the freedom of decision-making and the ability to react quickly to changes in the economy. The main disadvantages of ex post regulation are the tendency for companies to over-estimate bills for consumers, the difficulty of pursuing claims by consumers and the need to shift regulatory risk to consumers. Research implications/limitations - In the paper a research gap was identified, i.e. the effects of ex post regulation in the electricity sector in European Union countries where such regulation was applied. Identifying the research gap will help us understand the advantages and disadvantages of ex post regulation and will create a model for when it is a good moment to implement this in the economy. Beyond identifying the research gap, further studies of ex post regulation will be required. Originality/value/contribution - The additional value of the paper is the study of ex post regulation, its essence and types. The article analyzed the effects of ex post regulation in the electricity sector and provided valuable insights into the potential risks associated with this approach to economic regulation.
International Nuclear Information System (INIS)
Geraldo, L.P.; Smith, D.L.
1989-01-01
The methodology of covariance matrices and least-squares methods has been applied to the relative efficiency calibration of a Ge(Li) detector. Procedures employed to generate, manipulate and test covariance matrices which serve to properly represent uncertainties of experimental data are discussed. Calibration data fitting using least-squares methods has been performed for a particular experimental data set. (author) [pt
The covariance matrix of the Potts model: A random cluster analysis
International Nuclear Information System (INIS)
Borgs, C.; Chayes, J.T.
1996-01-01
We consider the covariance matrix, G_mn = q²⟨δ(σ_x, m); δ(σ_y, n)⟩, of the d-dimensional q-state Potts model, rewriting it in the random cluster representation of Fortuin and Kasteleyn. In many of the q ordered phases, we identify the eigenvalues of this matrix both in terms of representations of the unbroken symmetry group of the model and in terms of random cluster connectivities and covariances, thereby attributing algebraic significance to these stochastic geometric quantities. We also show that the correlation length corresponding to the decay rate of one of the eigenvalues is the same as the inverse decay rate of the diameter of finite clusters. In dimension d=2, we show that this correlation length and the correlation length of the two-point function with free boundary conditions at the corresponding dual temperature are equal up to a factor of two. For systems with first-order transitions, this relation helps to resolve certain inconsistencies between recent exact and numerical work on correlation lengths at the self-dual point β_o. For systems with second-order transitions, this relation implies the equality of the correlation length exponents from above and below threshold, as well as an amplitude ratio of two. In the course of proving the above results, we establish several properties of independent interest, including left continuity of the inverse correlation length with free boundary conditions and upper semicontinuity of the decay rate for finite clusters in all dimensions, and left continuity of the two-dimensional free boundary condition percolation probability at β_o. We also introduce DLR equations for the random cluster model and use them to establish ergodicity of the free measure. In order to prove these results, we introduce a new class of events which we call decoupling events and two inequalities for these events
Ahmed, Sajid
2016-11-24
Various examples of methods and systems are provided for direct closed-form finite alphabet constant-envelope waveforms for planar array beampatterns. In one example, a method includes defining a waveform covariance matrix based at least in part upon a two-dimensional fast Fourier transform (2D-FFT) analysis of a frequency domain matrix Hf associated with a planar array of antennas. Symbols can be encoded based upon the waveform covariance matrix and the encoded symbols can be transmitted via the planar array of antennas. In another embodiment, a system comprises an N x M planar array of antennas and transmission circuitry configured to transmit symbols via a two-dimensional waveform beampattern defined based at least in part upon a 2D-FFT analysis of a frequency domain matrix Hf associated with the planar array of antennas.
Bouchoucha, Taha
2017-01-23
In multiple-input multiple-output (MIMO) radar, appropriate correlated waveforms are designed to achieve desired transmit beampatterns. To design such waveforms, conventional MIMO radar methods use two steps. In the first step, the waveform covariance matrix, R, is synthesized to achieve the desired beampattern. In the second step, actual waveforms are designed to realize the synthesized covariance matrix. Most existing methods use iterative algorithms to solve these constrained optimization problems. The computational complexity of these algorithms is very high, which makes them difficult to use in practice. In this paper, to achieve the desired beampattern, a low-complexity discrete-Fourier-transform-based closed-form covariance matrix design technique is introduced for MIMO radar. The designed covariance matrix is then exploited to derive a novel closed-form algorithm to directly design the finite-alphabet constant-envelope waveforms for the desired beampattern. The proposed technique can be used to design waveforms for large antenna arrays to change the beampattern in real time. It is also shown that the number of transmitted symbols from each antenna depends on the beampattern and is less than the total number of transmit antenna elements.
Miller, D.C.M.; Poos, J.J.
2009-01-01
This report describes the first part of an ex post and ex ante evaluation of the long term management plan for sole and plaice in the North Sea as laid out in Council Regulation (EC) No 676/2007. This plan has been in place since 2007. The plan aims to ensure, in its first stage, that the stocks of
Directory of Open Access Journals (Sweden)
Yasuhiro Nakamura
2012-07-01
Full Text Available The present study introduces the four-component scattering power decomposition (4-CSPD) algorithm with rotation of the covariance matrix, and presents an experimental proof of the equivalence between the 4-CSPD algorithms based on rotation of the covariance matrix and of the coherency matrix. From a theoretical point of view, the 4-CSPD algorithms with rotation of the two matrices are identical. Although this seems obvious, no experimental evidence has yet been presented. In this paper, using polarimetric synthetic aperture radar (POLSAR) data acquired by the Phased Array L-band SAR (PALSAR) on board the Advanced Land Observing Satellite (ALOS), an experimental proof is presented to show that both algorithms indeed produce identical results.
The covariance matrix of neutron spectra used in the REAL 84 exercise
International Nuclear Information System (INIS)
Matzke, M.
1986-08-01
Covariance matrices of continuous functions are discussed. It is pointed out that the number of non-vanishing eigenvalues corresponds to the number of random variables (parameters) involved in the construction of the continuous functions. The covariance matrices used in the REAL 84 international intercomparison of unfolding methods of neutron spectra are investigated. It is shown that a small rank of these covariance matrices leads to a restriction of the possible solution spectra. (orig.) [de
Directory of Open Access Journals (Sweden)
Githure John I
2009-09-01
Full Text Available Abstract Background Autoregressive regression coefficients for Anopheles arabiensis aquatic habitat models are usually assessed using global error techniques and are reported as error covariance matrices. A global statistic, however, will summarize error estimates from multiple habitat locations. This makes it difficult to identify where there are clusters of An. arabiensis aquatic habitats of acceptable prediction. It is therefore useful to conduct some form of spatial error analysis to detect clusters of An. arabiensis aquatic habitats based on uncertainty residuals from individual sampled habitats. In this research, a method of error estimation for spatial simulation models was demonstrated using autocorrelation indices and eigenfunction spatial filters to distinguish among the effects of parameter uncertainty on a stochastic simulation of ecologically sampled Anopheles aquatic habitat covariates. A test for diagnostic checking of error residuals in an An. arabiensis aquatic habitat model may enable intervention efforts targeting productive habitat clusters, based on larval/pupal productivity, by using the asymptotic distribution of parameter estimates from a residual autocovariance matrix. The models considered in this research extend a normal regression analysis previously considered in the literature. Methods Field and remote-sampled data were collected during July 2006 to December 2007 in the Karima rice-village complex in Mwea, Kenya. SAS 9.1.4® was used to explore univariate statistics, correlations, distributions, and to generate global autocorrelation statistics from the ecologically sampled datasets. A local autocorrelation index was also generated using spatial covariance parameters (i.e., Moran's indices) in a SAS/GIS® database. The Moran's statistic was decomposed into orthogonal and uncorrelated synthetic map pattern components using a Poisson model with a gamma-distributed mean (i.e. negative binomial regression). The eigenfunction
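The global spatial autocorrelation index used above, Moran's I, reduces to a normalized cross-product of centered values z at n sites against a spatial weight matrix W; a minimal sketch (names illustrative):

```python
import numpy as np

def morans_i(z, W):
    """Global Moran's I: positive for spatially clustered values,
    near -1/(n-1) under spatial randomness, negative for dispersion."""
    z = np.asarray(z, dtype=float) - np.mean(z)
    n = len(z)
    return (n / W.sum()) * (z @ W @ z) / (z @ z)

# four sites on a line, with adjacent sites as neighbours
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
```

Applied to habitat residuals rather than raw covariates, the same statistic flags clusters of habitats whose model errors co-vary in space, which is the diagnostic use described above.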
Threat Object Detection using Covariance Matrix Modeling in X-ray Images
International Nuclear Information System (INIS)
Jeon, Byoun Gil; Kim, Jong Yul; Moon, Myung Kook
2016-01-01
The X-ray imaging system for aviation security is one such application. In airports, all passengers and their property should be inspected and accepted by security machines before boarding aircraft, to exclude all threat factors. Such threat factors might be directly connected to terrorist threats, which are awfully hazardous not only to passengers but also to people in highly populated areas such as major cities or buildings. Because the performance of the system increases along with the growth of IT technology, information of various types and good quality can be provided for the security check. However, human factors mainly affect the inspections. This means that human inspectors should become proficient in step with the growth of the technology for efficient and effective inspection, but there is a clear limit to proficiency; a human being is not a computer. Because of this limitation, aviation security techniques tend to provide not only abundant, high-quality information but also effective assistance for security inspectors. Many image processing applications have already been developed to provide efficient assistance for security systems. Naturally, the security check procedure should not be replaced by automatic software, because it is not guaranteed that an automatic system will never make a mistake. This paper addresses an application of threat object detection using covariance matrix modeling. The algorithm is implemented in the MATLAB environment and its performance is evaluated by comparison with other detection algorithms. Considering that the shape of an object in an image changes with its attitude relative to the imaging machine, the implemented detector is robust to rotation and scaling of an object
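One common way to use a covariance matrix as an object descriptor — in the spirit of region covariance features; a generic sketch, not the report's actual algorithm — is to take the covariance of per-pixel feature vectors over an image region:

```python
import numpy as np

def region_covariance(img):
    """5x5 covariance descriptor of pixel features (x, y, intensity,
    |dI/dy|, |dI/dx|) over a 2-D image region."""
    img = np.asarray(img, dtype=float)
    H, W = img.shape
    ys, xs = np.mgrid[0:H, 0:W]
    gy, gx = np.gradient(img)                          # per-axis intensity gradients
    F = np.stack([xs.ravel(), ys.ravel(), img.ravel(),
                  np.abs(gy).ravel(), np.abs(gx).ravel()])
    return np.cov(F)                                   # features as rows -> 5x5
```

Candidate regions are then compared through a distance between their descriptor matrices (covariances live on a manifold, so a suitable metric is needed); because the descriptor pools feature co-variation rather than raw pixel layout, it tolerates rotation and scale changes of the object, matching the robustness noted in the abstract.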
Threat Object Detection using Covariance Matrix Modeling in X-ray Images
Energy Technology Data Exchange (ETDEWEB)
Jeon, Byoun Gil; Kim, Jong Yul; Moon, Myung Kook [KAERI, Daejeon (Korea, Republic of)
2016-05-15
X-ray imaging systems for aviation security are one such application. In airports, all passengers and their property must be inspected and cleared by security machines before boarding aircraft, to catch all threat factors. Those threat factors may be directly connected to terrorist threats that are extremely hazardous not only to passengers but also to people in highly populated areas such as major cities or buildings. Because system performance grows along with IT technology, information of various types and good quality can be provided for security checks. However, the inspections still depend mainly on human factors: human inspectors would have to keep pace with the growth of technology to inspect efficiently and effectively, but there is a clear limit to human proficiency. Because of this limitation, aviation security techniques tend to provide not only abundant, high-quality information but also effective assistance for security inspectors. Many image processing applications have already been developed to assist such security systems. Naturally, the security check procedure should not be replaced outright by automatic software, because it cannot be guaranteed that an automatic system will never make a mistake. This paper addresses an application of threat object detection using covariance matrix modeling. The algorithm is implemented in the MATLAB environment and its performance is evaluated by comparison with other detection algorithms. Because the shape of an object in an image changes with the object's attitude relative to the imaging machine, the implemented detector is made robust to rotation and scale of an object.
How to enhance the future use of energy policy simulation models through ex post validation
International Nuclear Information System (INIS)
Qudrat-Ullah, Hassan
2017-01-01
Although simulation and modeling in general, and system dynamics models in particular, have long served the energy policy domain, ex post validation of these energy policy models is rarely addressed. In fact, ex post validation is a valuable area of research because it offers modelers a chance to enhance the future use of their simulation models by validating them against field data. This paper contributes by presenting (i) a system dynamics simulation model, which was developed and used to perform a three-dimensional (socio-economic and environmental) long-term assessment of Pakistan's energy policy in 1999, and (ii) a systematic analysis, through ex post validation, of the 15-year-old predictive scenarios produced by that model. How did the model predictions compare with the actual data? We report that the ongoing crisis of the electricity sector of Pakistan is unfolding as the model-based scenarios had projected. - Highlights: • Argues that increased use of energy policy models depends on validating their credibility. • An ex post validation process is presented as a solution to build confidence in models. • A unique system dynamics model, MDESRAP, is presented. • The root mean square percentage error and Theil's inequality statistics are applied. • The dynamic model, MDESRAP, is presented as an ex ante and ex post validated model.
Congedo, Marco; Barachant, Alexandre
2015-01-01
Currently the Riemannian geometry of symmetric positive definite (SPD) matrices is gaining momentum as a powerful tool in a wide range of engineering applications such as image, radar and biomedical signal processing. If the data are not natively represented in the form of SPD matrices, we may typically summarize them in such form by estimating covariance matrices of the data. However, once we manipulate such covariance matrices on the Riemannian manifold we lose the representation in the original data space. For instance, we can evaluate the geometric mean of a set of covariance matrices, but not the geometric mean of the data generating the covariance matrices, the space of interest in which the geometric mean can be interpreted. As a consequence, Riemannian information geometry is often perceived by non-experts as a "black-box" tool, and this perception prevents its wider adoption in the scientific community. Here we show that we can overcome this limitation by constructing a special form of SPD matrix embedding both the covariance structure of the data and the data itself. Incidentally, whenever the original data can be represented in the form of a generic data matrix (not even square), this special SPD matrix enables an exhaustive and unique description of the data up to second-order statistics. This is achieved by embedding the covariance structure of both the rows and columns of the data matrix, naturally allowing a wide range of possible applications and taking us beyond a mere interpretability issue. We demonstrate the method by manipulating satellite images (pansharpening) and event-related potentials (ERPs) of an electroencephalography brain-computer interface (BCI) study. The first example illustrates the effect of moving along geodesics in the original data space and the second provides a novel estimation of the ERP average (geometric mean), showing that, in contrast to the usual arithmetic mean, this estimation is robust to outliers. In
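To make the notion of a geometric mean of SPD matrices concrete, here is a minimal sketch using the log-Euclidean mean, a common closed-form surrogate for the affine-invariant geometric mean (which requires a fixed-point iteration). This is a generic illustration, not the authors' construction:

```python
import numpy as np

def logm_spd(S):
    """Matrix logarithm of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.log(w)) @ V.T

def expm_sym(S):
    """Matrix exponential of a symmetric matrix."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.exp(w)) @ V.T

def log_euclidean_mean(mats):
    """Exponential of the average of matrix logarithms."""
    return expm_sym(sum(logm_spd(S) for S in mats) / len(mats))

# Two SPD matrices whose log-Euclidean mean is the identity.
A = np.array([[2.0, 0.0], [0.0, 0.5]])
B = np.array([[0.5, 0.0], [0.0, 2.0]])
G = log_euclidean_mean([A, B])
```

Note the result is the identity, not the arithmetic mean diag(1.25, 1.25): geometric averaging treats a covariance and its inverse symmetrically.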
The application of sparse estimation of covariance matrix to quadratic discriminant analysis
Sun, Jiehuan; Zhao, Hongyu
2015-01-01
Background Although Linear Discriminant Analysis (LDA) is commonly used for classification, it may not be directly applied in genomics studies due to the large p, small n problem in these studies. Different versions of sparse LDA have been proposed to address this significant challenge. One implicit assumption of various LDA-based methods is that the covariance matrices are the same across different classes. However, rewiring of genetic networks (therefore different covariance matrices) acros...
Tests of Ex Ante Versus Ex Post Theories of Collateral Using Private and Public Information
Berger, A.N.; Frame, W.S.; Ioannidou, V.
2010-01-01
Collateral is a widely used, but not well understood, debt contracting feature. Two broad strands of theoretical literature explain collateral as arising from the existence of either ex ante private information or ex post incentive problems between borrowers and lenders. However, the extant
Tests of ex ante ex post theories of collateral using private and public information
Berger, A.N.; Frame, W.S.; Ioannidou, V.
2011-01-01
Collateral is a widely used, but not well understood, debt contracting feature. Two broad strands of theoretical literature explain collateral as arising from the existence of either ex ante private information or ex post incentive problems between borrowers and lenders. However, the extant
The ex post use of formal contracts in high-tech alliances. A contingency perspective
Jong, G. de; Klein Woolthuis, R.J.A.
2010-01-01
In this study we investigate key contingencies that determine the active use of a formal contract after the strategic alliance has started. The antecedents for this ex post contract use address the contracting process, the need to safeguard spill-over risks and the existence of trust. The model is
International Nuclear Information System (INIS)
Smith, D.L.
1987-01-01
A method is described for generating the covariance matrix of a set of experimental nuclear data which has been collapsed in size by the averaging of equivalent data points belonging to a larger parent data set. It is assumed that the data values and covariance matrix for the parent set are provided. The collapsed set is obtained by a proper weighted-averaging procedure based on the method of least squares. It is then shown by means of the law of error propagation that the elements of the covariance matrix for the collapsed set are linear combinations of elements from the parent set covariance matrix. The coefficients appearing in these combinations are binary products of the same coefficients which appear as weighting factors in the data collapsing procedure. As an example, the procedure is applied to a collection of recently-measured integral neutron-fission cross-section ratios. (orig.)
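The collapsing procedure described above can be sketched numerically: inverse-covariance (least-squares) weights average each group of equivalent points, and the collapsed covariance follows from the law of error propagation as a bilinear form in those same weights. A hedged sketch (illustrative code, not the paper's):

```python
import numpy as np

def collapse_covariance(values, cov, groups):
    """Collapse a parent data set by weighted averaging of equivalent points.
    values: (n,) parent data; cov: (n, n) parent covariance;
    groups: list of index lists, one per collapsed point."""
    n, m = len(values), len(groups)
    A = np.zeros((m, n))
    for i, idx in enumerate(groups):
        Cg = cov[np.ix_(idx, idx)]
        w = np.linalg.solve(Cg, np.ones(len(idx)))
        A[i, idx] = w / w.sum()          # least-squares weights
    collapsed_values = A @ values
    collapsed_cov = A @ cov @ A.T        # law of error propagation
    return collapsed_values, collapsed_cov

# Toy parent set: points 0,1 are equivalent, and so are points 2,3.
cov = np.diag([0.04, 0.01, 0.09, 0.09])
vals = np.array([1.0, 1.1, 2.0, 2.2])
v, C = collapse_covariance(vals, cov, [[0, 1], [2, 3]])
```

For uncorrelated data this reduces to the familiar inverse-variance weighted mean, with collapsed variance 1/Σ(1/σᵢ²).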
Few group collapsing of covariance matrix data based on a conservation principle
International Nuclear Information System (INIS)
Hiruta, H.; Palmiotti, G.; Salvatores, M.; Arcilla, R. Jr.; Oblozinsky, P.; McKnight, R.D.
2008-01-01
A new algorithm for a rigorous collapsing of covariance data is proposed, derived, implemented, and tested. The method is based on a conservation principle that allows preserving at a broad energy group structure the uncertainty calculated in a fine group energy structure for a specific integral parameter, using as weights the associated sensitivity coefficients
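A sensitivity-weighted collapse of this kind can be sketched as follows; the grouping, names and numbers are illustrative, not taken from the paper. The key property is that the broad-group sandwich uncertainty sᵀΣs for the chosen integral parameter equals the fine-group one:

```python
import numpy as np

def collapse_conserving(cov_fine, sens_fine, broad_groups):
    """Collapse a fine-group covariance using sensitivity coefficients as
    weights, preserving the integral-parameter uncertainty."""
    m = len(broad_groups)
    sens_broad = np.array([sens_fine[g].sum() for g in broad_groups])
    cov_broad = np.zeros((m, m))
    for I, gi in enumerate(broad_groups):
        for J, gj in enumerate(broad_groups):
            block = cov_fine[np.ix_(gi, gj)]
            cov_broad[I, J] = (sens_fine[gi] @ block @ sens_fine[gj]
                               / (sens_broad[I] * sens_broad[J]))
    return cov_broad, sens_broad

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))
cov_fine = A @ A.T                        # a fine-group covariance (SPD)
sens_fine = rng.uniform(0.5, 1.5, 6)      # sensitivity coefficients
groups = [np.array([0, 1, 2]), np.array([3, 4, 5])]
cov_broad, sens_broad = collapse_conserving(cov_fine, sens_fine, groups)

var_fine = sens_fine @ cov_fine @ sens_fine
var_broad = sens_broad @ cov_broad @ sens_broad   # equal by construction
```

The conservation holds exactly because each broad-group element is defined as the sensitivity-weighted sum of its fine-group block.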
Westgate, Philip M
2013-07-20
Generalized estimating equations (GEEs) are routinely used for the marginal analysis of correlated data. The efficiency of GEE depends on how closely the working covariance structure resembles the true structure, and therefore accurate modeling of the working correlation of the data is important. A popular approach is the use of an unstructured working correlation matrix, as it is not as restrictive as simpler structures such as exchangeable and AR-1 and thus can theoretically improve efficiency. However, because of the potential for having to estimate a large number of correlation parameters, variances of regression parameter estimates can be larger than theoretically expected when utilizing the unstructured working correlation matrix. Therefore, standard error estimates can be negatively biased. To account for this additional finite-sample variability, we derive a bias correction that can be applied to typical estimators of the covariance matrix of parameter estimates. Via simulation and in application to a longitudinal study, we show that our proposed correction improves standard error estimation and statistical inference. Copyright © 2012 John Wiley & Sons, Ltd.
CSIR Research Space (South Africa)
Salmon
2013-06-01
Full Text Available IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 6(3): 1079-1085. Land cover change detection using the internal covariance matrix of the extended Kalman filter over multiple spectral bands. Salmon BP, Kleynhans W, Van den Bergh...
The application of sparse estimation of covariance matrix to quadratic discriminant analysis.
Sun, Jiehuan; Zhao, Hongyu
2015-02-18
Although Linear Discriminant Analysis (LDA) is commonly used for classification, it may not be directly applied in genomics studies due to the large p, small n problem in these studies. Different versions of sparse LDA have been proposed to address this significant challenge. One implicit assumption of various LDA-based methods is that the covariance matrices are the same across different classes. However, rewiring of genetic networks (therefore different covariance matrices) across different diseases has been observed in many genomics studies, which suggests that LDA and its variations may be suboptimal for disease classifications. However, it is not clear whether considering differing genetic networks across diseases can improve classification in genomics studies. We propose a sparse version of Quadratic Discriminant Analysis (SQDA) to explicitly consider the differences of the genetic networks across diseases. Both simulation and real data analysis are performed to compare the performance of SQDA with six commonly used classification methods. SQDA provides more accurate classification results than other methods for both simulated and real data. Our method should prove useful for classification in genomics studies and other research settings, where covariances differ among classes.
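The idea of allowing class-specific, regularized covariance estimates can be sketched in a few lines. This is an illustrative stand-in for the authors' SQDA, not their method: instead of a sparse estimator it shrinks each class covariance toward its diagonal, a simple surrogate when p is large relative to n:

```python
import numpy as np

class ShrinkageQDA:
    """QDA with per-class covariances shrunk toward their diagonal
    (illustrative; alpha is a hypothetical shrinkage weight)."""
    def __init__(self, alpha=0.5):
        self.alpha = alpha

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_, self.covs_, self.priors_ = [], [], []
        for c in self.classes_:
            Xc = X[y == c]
            S = np.cov(Xc, rowvar=False)
            S = (1 - self.alpha) * S + self.alpha * np.diag(np.diag(S))
            self.means_.append(Xc.mean(axis=0))
            self.covs_.append(S)
            self.priors_.append(len(Xc) / len(X))
        return self

    def predict(self, X):
        scores = []
        for mu, S, p in zip(self.means_, self.covs_, self.priors_):
            d = X - mu
            Sinv = np.linalg.inv(S)
            logdet = np.linalg.slogdet(S)[1]
            # Gaussian quadratic discriminant score for this class
            q = -0.5 * np.einsum('ij,jk,ik->i', d, Sinv, d) - 0.5 * logdet + np.log(p)
            scores.append(q)
        return self.classes_[np.argmax(scores, axis=0)]

rng = np.random.default_rng(1)
X0 = rng.normal([0, 0], [1.0, 0.3], size=(100, 2))   # class 0: its own covariance
X1 = rng.normal([2, 2], [0.3, 1.0], size=(100, 2))   # class 1: a different one
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)
acc = (ShrinkageQDA().fit(X, y).predict(X) == y).mean()
```

Unlike LDA, the per-class covariances let the decision boundary bend to each class's own "network" of variable dependencies.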
Careau, Vincent; Wolak, Matthew E.; Carter, Patrick A.; Garland, Theodore
2015-01-01
Given the pace at which human-induced environmental changes occur, a pressing challenge is to determine the speed with which selection can drive evolutionary change. A key determinant of adaptive response to multivariate phenotypic selection is the additive genetic variance–covariance matrix (G). Yet knowledge of G in a population experiencing new or altered selection is not sufficient to predict selection response because G itself evolves in ways that are poorly understood. We experimentally evaluated changes in G when closely related behavioural traits experience continuous directional selection. We applied the genetic covariance tensor approach to a large dataset (n = 17 328 individuals) from a replicated, 31-generation artificial selection experiment that bred mice for voluntary wheel running on days 5 and 6 of a 6-day test. Selection on this subset of G induced proportional changes across the matrix for all 6 days of running behaviour within the first four generations. The changes in G induced by selection resulted in a fourfold slower-than-predicted rate of response to selection. Thus, selection exacerbated constraints within G and limited future adaptive response, a phenomenon that could have profound consequences for populations facing rapid environmental change. PMID:26582016
Careau, Vincent; Wolak, Matthew E; Carter, Patrick A; Garland, Theodore
2015-11-22
Given the pace at which human-induced environmental changes occur, a pressing challenge is to determine the speed with which selection can drive evolutionary change. A key determinant of adaptive response to multivariate phenotypic selection is the additive genetic variance-covariance matrix (G). Yet knowledge of G in a population experiencing new or altered selection is not sufficient to predict selection response because G itself evolves in ways that are poorly understood. We experimentally evaluated changes in G when closely related behavioural traits experience continuous directional selection. We applied the genetic covariance tensor approach to a large dataset (n = 17 328 individuals) from a replicated, 31-generation artificial selection experiment that bred mice for voluntary wheel running on days 5 and 6 of a 6-day test. Selection on this subset of G induced proportional changes across the matrix for all 6 days of running behaviour within the first four generations. The changes in G induced by selection resulted in a fourfold slower-than-predicted rate of response to selection. Thus, selection exacerbated constraints within G and limited future adaptive response, a phenomenon that could have profound consequences for populations facing rapid environmental change. © 2015 The Author(s).
“It Was Raining All the Time!”: Ex Post Tourist Weather Perceptions
Directory of Open Access Journals (Sweden)
Stefan Gössling
2016-01-01
Full Text Available The importance of weather for tourism is now widely recognized. However, no research has so far addressed weather events from retrospective viewpoints, and, in particular, the role of “extreme” events in longer-term holiday memories. To better understand the character of ex post weather experiences and their importance in destination image perceptions and future travel planning behavior, this exploratory study addressed a sample of 50 tourists from three globally important source markets: Austria, Germany and Switzerland. Results indicate that weather events do not dominate long-term memories of tourist experiences. Yet, weather events are important in shaping destination image, with “rain” being the single most important weather variable negatively influencing perceptions. Results also suggest that weather events perceived as extreme can involve considerable emotions. The study of ex post traveler memories consequently makes a valuable contribution to the understanding of the complexity of “extreme weather” events for tourist demand responses.
Klees, R.; Slobbe, D. C.; Farahani, H. H.
2018-03-01
The posed question arises, for instance, in regional gravity field modelling using weighted least-squares techniques, if the gravity field functionals are synthesised from the spherical harmonic coefficients of a satellite-only global gravity model (GGM) and are used as one of the noisy datasets. The associated noise covariance matrix appeared to be extremely ill-conditioned, with a singular value spectrum that decayed gradually to zero without any noticeable gap. We analysed three methods to deal with the ill-conditioned noise covariance matrix: Tikhonov regularisation of the noise covariance matrix in combination with the standard formula for the weighted least-squares estimator, a formulation of the weighted least-squares estimator which does not involve the inverse noise covariance matrix, and an estimator based on Rao's unified theory of least-squares. Our analysis was based on a numerical experiment involving a set of height anomalies synthesised from the GGM GOCO05s, which is provided with a full noise covariance matrix. We showed that the three estimators perform similarly, provided that the two regularisation parameters each method involves were chosen properly. As standard regularisation parameter choice rules do not apply here, we suggested a new parameter choice rule and demonstrated its performance. Using this rule, we found that the differences between the three least-squares estimates were within noise. For the standard formulation of the weighted least-squares estimator with regularised noise covariance matrix, this required an exceptionally strong regularisation, much larger than one would expect from the condition number of the noise covariance matrix. The preferred method is the inversion-free formulation of the weighted least-squares estimator, because of its simplicity with respect to the choice of the two regularisation parameters.
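The Tikhonov-regularised variant of the weighted least-squares estimator can be sketched on synthetic data (all numbers and the choice of alpha are illustrative assumptions, not the paper's): regularise the nearly singular noise covariance C as C + αI, then solve the normal equations without ever forming an explicit inverse:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 80, 3
A = rng.normal(size=(n, p))                 # design matrix
x_true = np.array([1.0, -2.0, 0.5])

# Build an ill-conditioned noise covariance: singular values decay
# gradually to (near) zero, as described for the GGM noise covariance.
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
d = np.geomspace(1.0, 1e-14, n)
C = Q @ np.diag(d) @ Q.T
noise = Q @ (np.sqrt(d) * rng.normal(size=n))
y = A @ x_true + noise

alpha = 1e-6                                # Tikhonov parameter (illustrative)
C_reg = C + alpha * np.eye(n)
W = np.linalg.solve(C_reg, A)               # C_reg^{-1} A via a solve, no inverse
x_hat = np.linalg.solve(A.T @ W, W.T @ y)   # weighted least-squares estimate
```

The regularisation floors the tiny singular values so the weighting stays numerically stable while still down-weighting the noisiest directions.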
International Nuclear Information System (INIS)
Kodeli, Ivan-Alexander
2005-01-01
The new cross-section covariance matrix library ZZ-VITAMIN-J/COVA/EFF3, intended to simplify and encourage sensitivity and uncertainty analysis, was prepared and is available from the NEA Data Bank. The library is organised in a ready-to-use form including both the covariance matrix data and processing tools: cross-section covariance matrices from the EFF-3 evaluation for five materials, ⁹Be, ²⁸Si, ⁵⁶Fe, ⁵⁸Ni and ⁶⁰Ni (other data will be included when available); the FORTRAN program ANGELO-2, to extrapolate/interpolate the covariance matrices to a user-defined energy group structure; and the FORTRAN program LAMBDA, to verify the mathematical properties of the covariance matrices, such as symmetry, positive definiteness, etc. The preparation, testing and use of the covariance matrix library are presented. The uncertainties based on the cross-section covariance data were compared with those based on other evaluations, like ENDF/B-VI. The collapsing procedure used in the ANGELO-2 code was compared with and validated against the one used in the NJOY system.
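The kind of mathematical checks LAMBDA performs can be illustrated in a few lines (the function and thresholds here are a generic stand-in, not LAMBDA itself): verify symmetry, positive semi-definiteness via the eigenvalue spectrum, and that implied correlations stay within [-1, 1]:

```python
import numpy as np

def check_covariance(C, tol=1e-10):
    """Report basic validity properties of a candidate covariance matrix."""
    report = {}
    report['symmetric'] = bool(np.allclose(C, C.T, atol=tol))
    eigvals = np.linalg.eigvalsh(0.5 * (C + C.T))       # symmetrize first
    report['positive_semidefinite'] = bool(eigvals.min() >= -tol)
    sd = np.sqrt(np.clip(np.diag(C), 0.0, None))
    with np.errstate(divide='ignore', invalid='ignore'):
        corr = C / np.outer(sd, sd)                     # implied correlations
    off = corr[~np.eye(len(C), dtype=bool)]
    report['correlations_bounded'] = bool(np.nanmax(np.abs(off)) <= 1 + tol)
    return report

good = np.array([[4.0, 1.2], [1.2, 1.0]])   # valid covariance
bad = np.array([[1.0, 2.0], [2.0, 1.0]])    # "correlation" of 2: not PSD
report_good = check_covariance(good)
report_bad = check_covariance(bad)
```

An evaluated covariance failing these checks typically indicates processing or transcription errors rather than genuine physics.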
Using causal maps to support ex-post assessment of social impacts of dams
International Nuclear Information System (INIS)
Aledo, Antonio; García-Andreu, Hugo; Pinese, José
2015-01-01
- Highlights: • We defend the usefulness of causal maps (CM) for ex-post impact assessment of dams. • Political decisions are presented as unavoidable technical measures. • CM enable the identification of the multiple causes involved in the dams' impacts. • Precise tracking of the causes points to alternative ways of managing the dams. • Participatory CM improve the quality of information and the governance of the research. This paper presents the results of an ex-post assessment of two important dams in Brazil. The study follows the principles of Social Impact Management, which offer a suitable framework for analyzing the complex social transformations triggered by hydroelectric dams. In the implementation of this approach, participative causal maps were used to identify the ex-post social impacts of the Porto Primavera and Rosana dams on the community of Porto Rico, located along the High Paraná River. We found that in the operation of dams there are intermediate causes of a political nature, stemming from decisions based on values and interests not determined by neutral, exclusively technical reasons; this insight opens up an area of action for managing the negative impacts of dams
Using causal maps to support ex-post assessment of social impacts of dams
Energy Technology Data Exchange (ETDEWEB)
Aledo, Antonio, E-mail: Antonio.Aledo@ua.es [Departamento de Sociología 1, Universidad de Alicante, Alicante 03080 (Spain); García-Andreu, Hugo, E-mail: Hugo.Andreu@ua.es [Departamento de Sociología 1, Universidad de Alicante, Alicante 03080 (Spain); Pinese, José, E-mail: pinese@uel.br [Centro de Ciências Exatas, UEL, Rodovia Celso Cid, Km 380, Campus Universitário, Londrina, PR 86057-970 (Brazil)
2015-11-15
- Highlights: • We defend the usefulness of causal maps (CM) for ex-post impact assessment of dams. • Political decisions are presented as unavoidable technical measures. • CM enable the identification of the multiple causes involved in the dams' impacts. • Precise tracking of the causes points to alternative ways of managing the dams. • Participatory CM improve the quality of information and the governance of the research. This paper presents the results of an ex-post assessment of two important dams in Brazil. The study follows the principles of Social Impact Management, which offer a suitable framework for analyzing the complex social transformations triggered by hydroelectric dams. In the implementation of this approach, participative causal maps were used to identify the ex-post social impacts of the Porto Primavera and Rosana dams on the community of Porto Rico, located along the High Paraná River. We found that in the operation of dams there are intermediate causes of a political nature, stemming from decisions based on values and interests not determined by neutral, exclusively technical reasons; this insight opens up an area of action for managing the negative impacts of dams.
Financing long-term care: ex ante, ex post or both?
Costa-Font, Joan; Courbage, Christophe; Swartz, Katherine
2015-03-01
This paper attempts to examine the heterogeneity in the public financing of long-term care (LTC) and the wide-ranging instruments in place to finance LTC services. We distinguish and classify the institutional responses to the need for LTC financing as ex ante (occurring prior to when the need arises, such as insurance) and ex post (occurring after the need arises, such as public sector and family financing). Then, we examine country-specific data to ascertain whether the two types of financing are complements or substitutes. Finally, we examine exploratory cross-national data on public expenditure determinants, specifically economic, demographic and social determinants. We show that although both ex ante and ex post mechanisms exist in all countries with advanced industrial economies and despite the fact that instruments are different across countries, ex ante and ex post instruments are largely substitutes for each other. Expenditure estimates to date indicate that the public financing of LTC is highly sensitive to a country's income, ageing of the population and the availability of informal caregiving. Copyright © 2015 John Wiley & Sons, Ltd.
Litvinenko, Alexander
2017-01-01
and maximizing likelihood functions. We show that an approximate Cholesky factorization of a dense matrix of size $2M\times 2M$ can be computed on a modern multi-core desktop in a few minutes. Further, HLIBCov is used for estimating the unknown parameters
DEFF Research Database (Denmark)
Hounyo, Ulrich
to a general class of estimators of integrated covolatility. We then show the first-order asymptotic validity of this method in the multivariate context with a potential presence of jumps, dependent microstructure noise, irregularly spaced and non-synchronous data. Due to our focus on non...... covariance estimator. As an application of our results, we also consider the bootstrap for regression coefficients. We show that the wild blocks of blocks bootstrap, appropriately centered, is able to mimic both the dependence and heterogeneity of the scores, thus justifying the construction of bootstrap percentile...... intervals as well as variance estimates in this context. This contrasts with the traditional pairs bootstrap, which is not able to mimic the score heterogeneity even in the simple case where no microstructure noise is present. Our Monte Carlo simulations show that the wild blocks of blocks bootstrap improves...
Uncertainty estimation of core safety parameters using cross-correlations of covariance matrix
International Nuclear Information System (INIS)
Yamamoto, A.; Yasue, Y.; Endo, T.; Kodama, Y.; Ohoka, Y.; Tatsumi, M.
2012-01-01
An uncertainty estimation method for core safety parameters, for which measurement values are not obtained, is proposed. We empirically recognize correlations among the prediction errors of core safety parameters, e.g., a correlation between the control rod worth and the assembly relative power at the corresponding position. Correlations of uncertainties among core safety parameters are theoretically estimated using the covariance of cross sections and the sensitivity coefficients of the core parameters. The estimated correlations among core safety parameters are verified through the direct Monte Carlo sampling method. Once the correlation of uncertainties among core safety parameters is known, we can estimate the uncertainty of a safety parameter for which no measurement value is obtained. Furthermore, the correlations can also be used to reduce the uncertainties of core safety parameters. (authors)
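The theoretical estimate described above is the standard sandwich rule: the covariance of core parameter errors is S M Sᵀ, where M is the cross-section covariance and S holds the sensitivity coefficients of each parameter. A hedged numerical sketch (sizes and values are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(5)
g = 8                                   # number of cross-section energy groups
A = rng.normal(size=(g, g))
M = A @ A.T / g                         # cross-section covariance (SPD)
S = rng.normal(size=(3, g))             # sensitivities of 3 core parameters

V = S @ M @ S.T                         # covariance of core-parameter errors
sd = np.sqrt(np.diag(V))                # individual parameter uncertainties
corr = V / np.outer(sd, sd)             # cross-correlations among parameters
```

Large off-diagonal entries of `corr` are what allow the measured error of one parameter to constrain the unmeasured error of another.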
Uncertainty estimation of core safety parameters using cross-correlations of covariance matrix
International Nuclear Information System (INIS)
Yamamoto, Akio; Yasue, Yoshihiro; Endo, Tomohiro; Kodama, Yasuhiro; Ohoka, Yasunori; Tatsumi, Masahiro
2013-01-01
An uncertainty reduction method for core safety parameters, for which measurement values are not obtained, is proposed. We empirically recognize that there exist correlations among the prediction errors of core safety parameters, e.g., a correlation between the control rod worth and the assembly relative power at the corresponding position. Correlations of errors among core safety parameters are theoretically estimated using the covariance of cross sections and the sensitivity coefficients of core parameters. The estimated correlations of errors among core safety parameters are verified through the direct Monte Carlo sampling method. Once the correlation of errors among core safety parameters is known, we can estimate the uncertainty of a safety parameter for which no measurement value is obtained. (author)
Orbit covariance propagation via quadratic-order state transition matrix in curvilinear coordinates
Hernando-Ayuso, Javier; Bombardelli, Claudio
2017-09-01
In this paper, an analytical second-order state transition matrix (STM) for relative motion in curvilinear coordinates is presented and applied to the problem of orbit uncertainty propagation in nearly circular orbits (eccentricity smaller than 0.1). The matrix is obtained by linearization around a second-order analytical approximation of the relative motion recently proposed by one of the authors and can be seen as a second-order extension of the curvilinear Clohessy-Wiltshire (C-W) solution. The accuracy of the uncertainty propagation is assessed by comparison with numerical results based on Monte Carlo propagation of a high-fidelity model including geopotential and third-body perturbations. Results show that the proposed STM can greatly improve the accuracy of the predicted relative state: the average error is found to be at least one order of magnitude smaller compared to the curvilinear C-W solution. In addition, the effect of environmental perturbations on the uncertainty propagation is shown to be negligible up to several revolutions in the geostationary region and for a few revolutions in low Earth orbit in the worst case.
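Covariance propagation through a state transition matrix is P(t) = Φ(t) P₀ Φ(t)ᵀ. The sketch below uses the classical first-order Cartesian Clohessy-Wiltshire in-plane STM rather than the paper's second-order curvilinear extension, so it illustrates the mechanism only (orbit values are illustrative):

```python
import numpy as np

def cw_stm(t, n):
    """In-plane Clohessy-Wiltshire state transition matrix,
    state = [x, y, vx, vy] (x radial, y along-track), n = mean motion."""
    s, c = np.sin(n * t), np.cos(n * t)
    return np.array([
        [4 - 3 * c,       0, s / n,           2 * (1 - c) / n],
        [6 * (s - n * t), 1, 2 * (c - 1) / n, (4 * s - 3 * n * t) / n],
        [3 * n * s,       0, c,               2 * s],
        [6 * n * (c - 1), 0, -2 * s,          4 * c - 3],
    ])

n = 2 * np.pi / 5400.0                     # mean motion of a ~90 min orbit
P0 = np.diag([100.0, 100.0, 0.01, 0.01])   # initial position/velocity variances
Phi = cw_stm(1800.0, n)
P1 = Phi @ P0 @ Phi.T                      # covariance after 1800 s
```

A second-order STM would additionally fold the quadratic terms of the dynamics into the propagated covariance, which is what improves accuracy over this linear C-W mapping.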
Directory of Open Access Journals (Sweden)
Zhengyan Zhang
2018-03-01
Full Text Available In this paper, we consider the problem of tracking the direction of arrivals (DOA) and the direction of departure (DOD) of multiple targets for bistatic multiple-input multiple-output (MIMO) radar. A high-precision tracking algorithm for target angle is proposed. First, the linear relationship between the covariance matrix difference and the angle difference of the adjacent moment was obtained through three approximate relations. Then, the proposed algorithm obtained the relationship between the elements in the covariance matrix difference. On this basis, the performance of the algorithm was improved by averaging the covariance matrix element. Finally, the least square method was used to estimate the DOD and DOA. The algorithm realized the automatic correlation of the angle and provided better performance when compared with the adaptive asymmetric joint diagonalization (AAJD) algorithm. The simulation results demonstrated the effectiveness of the proposed algorithm. The algorithm provides the technical support for the practical application of MIMO radar.
Zhang, Zhengyan; Zhang, Jianyun; Zhou, Qingsong; Li, Xiaobo
2018-03-07
In this paper, we consider the problem of tracking the direction of arrivals (DOA) and the direction of departure (DOD) of multiple targets for bistatic multiple-input multiple-output (MIMO) radar. A high-precision tracking algorithm for target angle is proposed. First, the linear relationship between the covariance matrix difference and the angle difference of the adjacent moment was obtained through three approximate relations. Then, the proposed algorithm obtained the relationship between the elements in the covariance matrix difference. On this basis, the performance of the algorithm was improved by averaging the covariance matrix element. Finally, the least square method was used to estimate the DOD and DOA. The algorithm realized the automatic correlation of the angle and provided better performance when compared with the adaptive asymmetric joint diagonalization (AAJD) algorithm. The simulation results demonstrated the effectiveness of the proposed algorithm. The algorithm provides the technical support for the practical application of MIMO radar.
Vachálek, Ján
2011-12-01
The paper compares the abilities of forgetting methods to track the time-varying parameters of two different simulated models under different types of excitation. The observed quantities in the simulations are the integral sum of the Euclidean norm of the deviation of the parameter estimates from their true values, and a selected band prediction error count. As supplementary information, we observe the eigenvalues of the covariance matrix. In the paper we used a modified method of Regularized Exponential Forgetting with Alternative Covariance Matrix (REFACM) along with Directional Forgetting (DF) and three standard regularized methods.
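The baseline these methods build on is recursive least squares with exponential forgetting: a factor λ < 1 discounts old data so the estimate can follow a drifting parameter, while the covariance matrix P tracks estimator uncertainty. A generic textbook sketch (not the REFACM variant of the paper):

```python
import numpy as np

def rls_forgetting(phi_seq, y_seq, lam=0.95):
    """Recursive least squares with exponential forgetting factor lam."""
    p = phi_seq.shape[1]
    theta = np.zeros(p)
    P = 1e3 * np.eye(p)                       # large initial covariance
    for phi, y in zip(phi_seq, y_seq):
        k = P @ phi / (lam + phi @ P @ phi)   # gain vector
        theta = theta + k * (y - phi @ theta)
        P = (P - np.outer(k, phi @ P)) / lam  # discount old information
    return theta, P

rng = np.random.default_rng(3)
T = 400
phi_seq = rng.normal(size=(T, 2))             # persistently exciting regressors
# True parameters jump halfway through the run.
true = np.where(np.arange(T)[:, None] < 200, [1.0, -1.0], [2.0, 0.5])
y_seq = np.sum(phi_seq * true, axis=1) + 0.01 * rng.normal(size=T)
theta, P = rls_forgetting(phi_seq, y_seq)
```

With poor excitation P's eigenvalues can blow up ("covariance wind-up"), which is exactly what regularized and directional forgetting variants are designed to prevent.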
Determinants of the ex post banking spread in the Brazilian market
Directory of Open Access Journals (Sweden)
José Alves Dantas
2012-08-01
Full Text Available Bank profitability is generally considered a relevant factor for ensuring the soundness of the financial system, reducing the risks associated with insolvency events in the sector. In Brazil, however, there has been debate about the profits of the financial institutions operating in the country, centered on the argument that such profits are supposedly very high, placing an excessive burden on the productive sector. For this reason, several studies have assessed the structure, evolution and determinants of the banking spread, which is considered the main variable responsible for the supposedly abnormal profits. Methodologically, this research has concentrated on investigating the ex ante spread of operations with non-earmarked funds and has used macroeconomic factors as independent variables. This study seeks to identify determinants of the ex post banking spread, favoring institution-specific, that is, microeconomic, explanatory variables. In the literature on determinants of the ex post banking spread in Brazil, only one previous study was identified, and it presented rather unrepresentative results due to a micronumerosity (small-sample) problem. To avoid this problem, this study uses balance-sheet data from January 2000 to October 2009 for banking institutions with an active credit portfolio. Using a dynamic panel data regression model, nine hypotheses are tested, and the ex post spread level is found to have a significant relationship that is: 1. positive with the credit risk of the portfolio, with the degree of concentration of the credit market and with the level of economic activity; 2. negative with the institution's relative share of the credit market. On the other hand, no statistically relevant relationships were found between the ex post spread and the level of coverage of administrative expenses by revenues from the provision of
Kalayeh, H. M.; Landgrebe, D. A.
1983-01-01
A criterion which measures the quality of the estimate of the covariance matrix of a multivariate normal distribution is developed. Based on this criterion, the necessary number of training samples is predicted. Experimental results which are used as a guide for determining the number of training samples are included. Previously announced in STAR as N82-28109
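The dependence of covariance-estimate quality on training-sample size can be illustrated with a small simulation (this is not the paper's criterion; the dimension, the true covariance and the Frobenius-norm error measure are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 5                                            # dimension of the normal distribution
true_cov = np.eye(p) + 0.3 * np.ones((p, p))     # an arbitrary true covariance

def sample_cov_error(n):
    """Frobenius-norm error of the sample covariance from n training samples."""
    x = rng.multivariate_normal(np.zeros(p), true_cov, size=n)
    s = np.cov(x, rowvar=False)
    return np.linalg.norm(s - true_cov, "fro")

# The error shrinks roughly like 1/sqrt(n) as the training set grows.
errors = {n: np.mean([sample_cov_error(n) for _ in range(50)]) for n in (20, 200, 2000)}
print(errors)
```

Averaging over 50 repetitions makes the monotone decrease with sample size visible despite simulation noise.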
Ex post and ex ante willingness to pay (WTP) for the ICT Malaria Pf/Pv test kit in Myanmar.
Cho-Min-Naing; Lertmaharit, S; Kamol-Ratanakul, P; Saul, A J
2000-03-01
Willingness to pay (WTP) for the ICT Malaria Pf/Pv test kit was assessed by the contingent valuation method using a bidding game approach in two villages in Myanmar. Kankone (KK) village has a rural health center (RHC) and Yae-Aye-Sann (YAS) is serviced by a community health worker (CHW). The objectives were to assess WTP for the ICT Malaria Pf/Pv test kit and to determine factors affecting the WTP. In both villages WTP was assessed in two different conditions, ex post and ex ante. The ex post WTP was assessed at the RHC in KK village and at the residence of a CHW in YAS village on patients immediately following diagnosis of malaria. The ex ante WTP was assessed by household interviews in both villages on people with a prior history of malaria. Ordinary least squares (OLS) multiple regression analysis was used to analyze factors affecting WTP. The WTP was higher in ex post conditions than ex ante in both villages. WTP was significantly positively associated with the average monthly income of the respondents and severity of illness in both ex post and ex ante conditions (p < 0.05); a significant association with WTP (p < 0.05) was also found in the ex post condition in the RHC survey in KK village.
Ex-post evaluation. Research independency of the basic science study of JAERI
International Nuclear Information System (INIS)
Yanagisawa, Kazuaki; Takahashi, Shoji
2010-06-01
Research independency was defined here as the continuity and development of a research field over the course of its history. The authors took three fields as parameters for the ex-post evaluation, all belonging to the basic science studied at the Japan Atomic Energy Research Institute (JAERI). The first parameter was actinide research, situated at the center of the research network from a socio-economic viewpoint; the second was positron research, situated at the periphery of the network; and the third was neutron research, which competed with other research organizations in Japan. All three were financially supported and promoted by JAERI. The target period covered the 25 years from 1978 to 2002. INIS (International Nuclear Information System), operated by the International Atomic Energy Agency (IAEA), was used as the tool for the present bibliometric study. It was revealed that the important factors behind the sustained success of research independency were constant efforts to accomplish the mission, the education of successors by passing on explicit and tacit research findings, and the construction of intellectual networks with learned circles and industries collaborating with JAERI. These factors were quantitatively clarified. Conversely, the main factors that impeded the development of research independency were the discontinuation of research caused by the retirement, transfer or change of occupation of core researchers, or their unexpected death. Across the three parameters, the authors confirmed time-dependent stages of germination, development and decline of research independency, attributable to the interaction between the sustaining and impeding factors. For this kind of ex-post evaluation, the support of the field research laboratory was indispensable. (author)
International Nuclear Information System (INIS)
Webber, Phil; Gouldson, Andy; Kerr, Niall
2015-01-01
There is widespread interest in the ability of retrofit schemes to shape domestic energy use in order to tackle fuel poverty and reduce carbon emissions. Although much has been written on the topic, there have been few large-scale ex post evaluations of the actual impacts of such schemes. We address this by assessing domestic energy use before and after the Kirklees Warm Zone (KWZ) scheme, which, by fitting insulation in 51,000 homes in the 2007–2010 period, is one of the largest retrofit schemes completed in the UK to date. To do this, we develop and apply a new methodology that isolates the impacts of retrofit activity from broader background trends in energy use. The results suggest that the actual impacts of the KWZ scheme have been higher than predicted, and that the scale of any performance gaps or rebound effects has been lower than has often been assumed. They also suggest that impacts on energy use in lower income areas are consistent with predictions, but that impacts in middle and higher income areas are higher than predicted. These findings support the case for the wider and/or accelerated adoption of domestic retrofit schemes in other contexts. -- Highlights: •A large scale, ex post evaluation of the impacts of a household retrofit scheme. •A new methodology to separate retrofit impacts from background trends. •Shows impacts of retrofit have been 1.2–1.7 times higher than predicted. •Impacts as predicted in lower income areas, higher in middle and upper income areas. •Findings support the case for the wider and faster adoption of domestic retrofit
Ex post power economic analysis of record of decision operational restrictions at Glen Canyon Dam.
Energy Technology Data Exchange (ETDEWEB)
Veselka, T. D.; Poch, L. A.; Palmer, C. S.; Loftin, S.; Osiek, B; Decision and Information Sciences; Western Area Power Administration
2010-07-31
On October 9, 1996, Bruce Babbitt, then-Secretary of the U.S. Department of the Interior signed the Record of Decision (ROD) on operating criteria for the Glen Canyon Dam (GCD). Criteria selected were based on the Modified Low Fluctuating Flow (MLFF) Alternative as described in the Operation of Glen Canyon Dam, Colorado River Storage Project, Arizona, Final Environmental Impact Statement (EIS) (Reclamation 1995). These restrictions reduced the operating flexibility of the hydroelectric power plant and therefore its economic value. The EIS provided impact information to support the ROD, including an analysis of operating criteria alternatives on power system economics. This ex post study reevaluates ROD power economic impacts and compares these results to the economic analysis performed prior (ex ante) to the ROD for the MLFF Alternative. On the basis of the methodology used in the ex ante analysis, anticipated annual economic impacts of the ROD were estimated to range from approximately $15.1 million to $44.2 million in terms of 1991 dollars ($1991). This ex post analysis incorporates historical events that took place between 1997 and 2005, including the evolution of power markets in the Western Electricity Coordinating Council as reflected in market prices for capacity and energy. Prompted by ROD operational restrictions, this analysis also incorporates a decision made by the Western Area Power Administration to modify commitments that it made to its customers. Simulated operations of GCD were based on the premise that hourly production patterns would maximize the economic value of the hydropower resource. On the basis of this assumption, it was estimated that economic impacts were on average $26.3 million in $1991, or $39 million in $2009.
Beetsma, Roel; Bluhm, Benjamin; Giuliodori, Massimo; Wierts, Peter
2013-01-01
This paper splits the ex post error in the budget balance, defined as the final budget figure minus the planned figure, into implementation and revision errors, and investigates the determinants of these errors. The implementation error is the difference between the nowcast, published toward the end
From first-release to ex post fiscal data: exploring the sources of revision errors in the EU
Beetsma, R.; Bluhm, B.; Giuliodori, M.; Wierts, P.
2012-01-01
This paper explores the determinants of deviations of ex post budget outcomes from first-release outcomes published towards the end of the year of budget implementation. The predictive content of the first-release outcomes is important, because these figures are an input for the next budget and the
Bouchoucha, Taha; Ahmed, Sajid; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim
2017-01-01
optimization problems. The computational complexity of these algorithms is very high, which makes them difficult to use in practice. In this paper, to achieve the desired beampattern, a low complexity discrete-Fourier-transform based closed-form covariance
Overcoming Ex-Post Development Stagnation: Interventions with Continuity and Scaling in Mind
Directory of Open Access Journals (Sweden)
Bradley T. Hiller
2016-02-01
Full Text Available Project interventions are important vehicles for development globally. However, while resources are often allocated to new and innovative (pilot) projects—with varying levels of success—there is seemingly less focus on consolidating and/or scaling the positive impacts of successful larger interventions. Assuming an overarching development goal of long-lasting impact at scale, this approach seems somewhat contradictory. Scaling is often not integrated into project planning, design and implementation, and is rarely pursued genuinely in the ex-post period. However, where demand for further development remains outstanding beyond project completion, opportunities may exist to build upon project platforms and extend benefits in a cost-effective manner. This paper examines existing scaling typologies, before introducing "scaling-within" as a concept to promote greater continuity of development to a wider range of stakeholders. Scaling-within offers the opportunity to "in-fill" intervention principles and practices to both project and non-project communities within a broader strategic framework to address disparities and to promote sustainable development. The authors draw on research from case studies of large-scale integrated watershed rehabilitation projects and assess scaling-within against a contemporary scaling framework drawn from the literature. While the concept is tested with watersheds as the administrative unit, the authors anticipate applications for other project management units.
Analysis of international content of ranked nursing journals in 2005 using ex post facto design.
Dougherty, Molly C; Lin, Shu-Yuan; McKenna, Hugh P; Seers, Kate; Keeney, Sinead
2011-06-01
The purpose of this study was to examine articles in ISI-ranked nursing journals and to analyse the articles and journals, using definitions of international and article content. Growing emphasis on global health includes attention to the international nursing literature. Contributions from Latin America and Africa have been reported. Attention to ranked nursing journals to support scholarship in global health is needed. Using an ex post facto design, characteristics of 2827 articles, authors and journals of 32 ranked nursing journals for the year 2005 were analysed between June 2006 and June 2007. Using definitions of international and of article content, the research questions were analysed statistically. (a) 928 (32·8%) articles were international; (b) 2016 (71·3%) articles were empirical or scholarly; (c) 826 (89·3%) articles reflecting international content were scholarly or empirical; (d) among international articles more were empirical (66·3% vs. 32·8%; χ²(1) = 283·6, P < 0·001) and more were scholarly (29·2% vs. 22·7%; χ²(1) = 15·85, P < 0·001); (e) … international, based on author characteristics; (f) 20 (62·5%) journals were led by an international editorial team; and (g) international journals had more international articles (3·6% vs. 29·2%; χ²(1) = 175·75, P < 0·001) than non-international journals (t = -14·43, P < 0·001). Results indicate the need to examine the international relevance of the nursing literature. © 2011 Blackwell Publishing Ltd.
Directory of Open Access Journals (Sweden)
Cathy Suykens
2016-12-01
Full Text Available There is a wealth of literature on the design of ex post compensation mechanisms for natural disasters. However, more research needs to be done on the manner in which these mechanisms could steer citizens toward adopting individual-level preventive and protection measures in the face of flood risks. We have provided a comparative legal analysis of the financial compensation mechanisms following floods, be it through insurance, public funds, or a combination of both, with an empirical focus on Belgium, the Netherlands, England, and France. Similarities and differences between the methods in which these compensation mechanisms for flood damages enhance resilience were analyzed. The comparative analysis especially focused on the link between the recovery strategy on the one hand and prevention and mitigation strategies on the other. There is great potential within the recovery strategy for promoting preventive action, for example in terms of discouraging citizens from living in high-risk areas, or encouraging the uptake of mitigation measures, such as adaptive building. However, this large potential has yet to be realized, in part because of insufficient consideration and promotion of these connections within existing legal frameworks. We have made recommendations about how the linkages between strategies can be further improved. These recommendations relate to, among others, the promotion of resilient reinstatement through recovery mechanisms and the removal of legal barriers preventing the establishment of link-inducing measures.
Tarai, Madhumita; Kumar, Keshav; Divya, O.; Bairi, Partha; Mishra, Kishor Kumar; Mishra, Ashok Kumar
2017-09-01
The present work compares the dissimilarity and covariance based unsupervised chemometric classification approaches by taking the total synchronous fluorescence spectroscopy data sets acquired for the cumin and non-cumin based herbal preparations. The conventional decomposition method involves eigenvalue-eigenvector analysis of the covariance of the data set and finds the factors that can explain the overall major sources of variation present in the data set. The conventional approach does this irrespective of the fact that the samples belong to intrinsically different groups and hence leads to poor class separation. The present work shows that classification of such samples can be optimized by performing the eigenvalue-eigenvector decomposition on the pair-wise dissimilarity matrix.
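The contrast described above can be sketched in a few lines: scores from an eigendecomposition of the data covariance (classical PCA) versus coordinates from an eigendecomposition of the double-centred pairwise dissimilarity matrix, as in classical multidimensional scaling. The toy data, the two-group structure and the choice of squared Euclidean dissimilarity are illustrative assumptions, not the paper's spectroscopy data:

```python
import numpy as np

rng = np.random.default_rng(1)
# Two intrinsically different groups of "spectra" (rows = samples).
a = rng.normal(0.0, 0.1, size=(10, 40)) + np.linspace(0, 1, 40)
b = rng.normal(0.0, 0.1, size=(10, 40)) + np.linspace(1, 0, 40)
x = np.vstack([a, b])

# Covariance approach: eigendecomposition of the covariance of the data.
xc = x - x.mean(axis=0)
cov_vals, cov_vecs = np.linalg.eigh(np.cov(xc, rowvar=False))
scores_pca = xc @ cov_vecs[:, ::-1][:, :2]        # top-2 principal components

# Dissimilarity approach: eigendecomposition of the double-centred
# pairwise (squared Euclidean) dissimilarity matrix, as in classical MDS.
d2 = np.square(x[:, None, :] - x[None, :, :]).sum(axis=2)
n = d2.shape[0]
j = np.eye(n) - np.ones((n, n)) / n
bmat = -0.5 * j @ d2 @ j
mds_vals, mds_vecs = np.linalg.eigh(bmat)
scores_mds = mds_vecs[:, ::-1][:, :2] * np.sqrt(np.maximum(mds_vals[::-1][:2], 0))

print(scores_pca.shape, scores_mds.shape)
```

Both routes yield a low-dimensional embedding; the paper's point is that the dissimilarity route can separate known groups better than decomposing the pooled covariance.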
Directory of Open Access Journals (Sweden)
Leif E. Peterson
1997-11-01
Full Text Available A computer program for multifactor relative risks, confidence limits, and tests of hypotheses using regression coefficients and a variance-covariance matrix obtained from a previous additive or multiplicative regression analysis is described in detail. Data used by the program can be stored and input from an external disk-file or entered via the keyboard. The output contains a list of the input data, point estimates of single or joint effects, confidence intervals and tests of hypotheses based on a minimum modified chi-square statistic. Availability of the program is also discussed.
Directory of Open Access Journals (Sweden)
Z. Khodadadi
2008-03-01
Full Text Available Let S be the matrix of the residual sum of squares in the linear model Y = Aβ + e, where the error matrix e follows an elliptically contoured distribution with unknown scale matrix Σ. In the present work, we consider the problem of estimating Σ with respect to the squared loss function L(Σ̂, Σ) = tr[(Σ̂Σ⁻¹ − I)²]. It is shown that the improvement of the estimators obtained by James and Stein [7] and Dey and Srinivasan [1] under the normality assumption remains robust under an elliptically contoured distribution with respect to this squared loss function.
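The squared loss above is straightforward to evaluate numerically; a minimal sketch (the example matrices below are arbitrary, chosen only to exercise the formula):

```python
import numpy as np

def squared_loss(sigma_hat, sigma):
    """L(sigma_hat, sigma) = tr[(sigma_hat @ sigma^{-1} - I)^2]."""
    m = sigma_hat @ np.linalg.inv(sigma) - np.eye(sigma.shape[0])
    return np.trace(m @ m)

sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
print(squared_loss(sigma, sigma))        # ≈ 0.0: a perfect estimate incurs no loss
print(squared_loss(1.1 * sigma, sigma))  # ≈ 0.02: inflating the scale is penalised
```

Note the loss is invariant to the common scale structure: over-estimating Σ by a factor c gives tr[(cI − I)²] = p(c − 1)² regardless of Σ.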
Moraes Rêgo, Patrícia Helena; Viana da Fonseca Neto, João; Ferreira, Ernesto M.
2015-08-01
The main focus of this article is to present a proposal to solve, via UDUᵀ factorisation, the convergence and numerical stability problems that are related to the covariance matrix ill-conditioning of the recursive least squares (RLS) approach for online approximations of the algebraic Riccati equation (ARE) solution associated with the discrete linear quadratic regulator (DLQR) problem formulated in the actor-critic reinforcement learning and approximate dynamic programming context. The parameterisations of the Bellman equation, utility function and dynamic system as well as the algebra of the Kronecker product assemble a framework for the solution of the DLQR problem. The condition number and the positivity parameter of the covariance matrix are associated with statistical metrics for evaluating the approximation performance of the ARE solution via RLS-based estimators. The performance of RLS approximators is also evaluated in terms of consistence and polarisation when associated with reinforcement learning methods. The methodology contemplates realisations of online designs for DLQR controllers that are evaluated in a multivariable dynamic system model.
DEFF Research Database (Denmark)
Kinnebrock, Silja; Podolskij, Mark
This paper introduces a new estimator to measure the ex-post covariation between high-frequency financial time series under market microstructure noise. We provide an asymptotic limit theory (including feasible central limit theorems) for standard methods such as regression and correlation analysis … process can be relaxed and how our method can be applied to non-synchronous observations. We also present an empirical study of how high-frequency correlations, regressions and covariances change through time.
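The ex-post covariation that such estimators target can be sketched in its noise-free form as the realized covariance: the sum of outer products of synchronized high-frequency return vectors. This sketch ignores the microstructure-noise correction that is the paper's actual contribution, and the simulated price processes are an assumption:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5_000                                      # intraday observations
# Two correlated efficient log-price increments (simulated, no noise).
dw = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=n)
returns = dw * np.sqrt(1.0 / n)                # scale so the integrated covariance
                                               # matrix is [[1, 0.6], [0.6, 1]]

# Realized covariance: sum of outer products of high-frequency returns.
rc = returns.T @ returns
print(np.round(rc, 2))
```

With genuine noisy or non-synchronous data this naive sum is badly biased, which is what kernel- and pre-averaging-based estimators are designed to repair.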
Aymerich, Marta; Carrion, Carme; Gallo, Pedro; Garcia, Maria; López-Bermejo, Abel; Quesada, Miquel; Ramos, Rafel
2012-08-01
Most ex-post evaluations of research funding programs are based on bibliometric methods and, although this approach has been widely used, it only examines one facet of a project's impact, namely scientific productivity. More comprehensive models of payback assessment of research activities are designed for large-scale projects with extensive funding. The purpose of this study was to design and implement a methodology for the ex-post evaluation of small-scale projects that would take into account both the fulfillment of projects' stated objectives and other wider benefits to society as payback measures. We used a two-phase ex-post approach to appraise impact for 173 small-scale projects funded in 2007 and 2008 by a Spanish network center for research in epidemiology and public health. In the internal phase we used a questionnaire to query the principal investigator (PI) on the outcomes as well as the actual and potential impact of each project; in the external phase we sent a second questionnaire to external reviewers with the aim of assessing (by peer review) the performance of each individual project. Overall, 43% of the projects were rated as having completed their objectives "totally", and 40% "considerably". The research activities funded were reported by PIs as socially beneficial, with the greatest impact on research capacity (50% of payback to society) and on knowledge translation (above 11%). The method proposed showed a good discriminating ability that makes it possible to measure, reliably, the extent to which a project's objectives were met as well as the degree to which the project contributed to enhancing the group's scientific performance and to its social payback. Copyright © 2012 Elsevier Ltd. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Scheer, J.; Clancy, M.; Ni Hogain, S. [Sustainable Energy Authority of Ireland, Wilton Park House, Wilton Terrace, Dublin 2 (Ireland)
2013-02-15
This paper quantifies the energy savings realised by a sample of participants in the Sustainable Energy Authority of Ireland's Home Energy Saving (HES) residential retrofit scheme (currently branded as the Better Energy Homes scheme), through an ex post billing analysis. The billing data are used to evaluate: (1) the reduction in gas consumption of the sample between pre- (2008) and post- (2010) scheme participation when compared to the gas consumption of a control group, (2) an estimate of the shortfall when this result is compared to engineering-type ex ante savings estimates and (3) the degree to which these results may apply to the wider population. All dwellings in the study underwent energy efficiency improvements, including insulation upgrades (wall and/or roof), installation of high-efficiency boilers and/or improved heating controls, as part of the HES scheme. Metered gas use data for the 210 households were obtained from meter operators for a number of years preceding dwelling upgrades and for a post-intervention period of 1 year. Dwelling characteristics and some household behavioural data were obtained through a survey of the sample. The gas network operator provided anonymised data on gas usage for 640,000 customers collected over the same period as the HES sample. Dwelling type data provided with the population dataset enabled matching with the HES sample to increase the internal validity of the comparison between the control (matched population data) and the treatment (HES sample). Using a difference-in-difference methodology, the change in demand of the sample was compared with that of the matched population subset of gas-using customers in Ireland over the same time period. The mean reduction in gas demand as a result of energy efficiency upgrades for the HES sample is estimated as 21 % or 3,664 ± 603 kWh between 2008 and 2010. An ex ante estimate of average energy savings, based on engineering calculations (U-value reductions and improved boiler
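The difference-in-difference calculation described above reduces to simple arithmetic; a sketch with purely hypothetical consumption figures (not the scheme's data):

```python
# Difference-in-differences on (hypothetical) mean annual gas use in kWh.
treated_pre, treated_post = 17_500.0, 13_000.0   # retrofit sample, pre vs post
control_pre, control_post = 16_800.0, 15_500.0   # matched population subset

# Change attributable to the upgrades = treated change minus background change.
did = (treated_post - treated_pre) - (control_post - control_pre)
print(did)   # -3200.0, i.e. an estimated saving of 3,200 kWh
```

Subtracting the control group's change strips out background trends (weather, prices) that affect treated and untreated households alike.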
Ex ante and ex post control of the public interest in public-private partnership agreements
Directory of Open Access Journals (Sweden)
Ćirić Aleksandar
2016-01-01
implementing the agreement as well as an effective control of such implementation (the ex post methodological aspect). PPP agreements should provide a mechanism for adjusting their contents to changed circumstances, i.e. the social, legal and economic context which pervades the preparation, implementation and realization of the specific PPP project. Among other factors, this flexibility rests on mutual trust and cooperation between the contracting parties. Ultimately, in the context of control over exercising the public interest, the methodological approach of the PPP agreement essentially lies in preventing the public partner from succumbing to the temptation of adopting the simplest available solution. Instead, it is necessary to clearly define the expectations which the public body, as the title-holder of the public interest, has with regard to specific PPP projects, and to limit the responsibility of the public actor. The success of a PPP agreement in each particular case depends on the extent to which the agreement provides for adequate treatment of these presumptions.
Online decision-making processes in Colombian consumers: an ex-post-facto study
Gómez-Díaz, Javier Andrés
2016-01-01
Online (internet) purchase decision-making was reviewed retrospectively (ex post facto) with a sample of 340 people who had (n = 187) and had not (n = 153) purchased through this channel. The questionnaire used included statements for each of the stages involved in the choice (problem identification, information search, evaluation of alternatives and purchase behaviour). Some scales were designed for the study and others were adapted from those available in the ...
Amir F. N. Abdul-Manan; Azizan Baharuddin; Lee Wei Chang
2015-01-01
Ex-post evaluations of energy policies in Malaysia between 1970 and 2010 were conducted. The development of energy policies in Malaysia was traced from the early 1970s, with the introduction of the country's first energy-related policy, through to 2010, with the country's first endeavour towards a bio-based energy system. Analyses revealed that many of the policies either: (1) directly responded to changes in global/domestic socioeconomic and political events, or (2) provided visions ...
Ex-ante and ex-post measurement of equality of opportunity in health: a normative decomposition.
Donni, Paolo Li; Peragine, Vito; Pignataro, Giuseppe
2014-02-01
This paper proposes and discusses two different approaches to the definition of inequality in health: the ex-ante and the ex-post approach. It proposes strategies for measuring inequality of opportunity in health based on the path-independent Atkinson inequality index. The proposed methodology is illustrated using data from the British Household Panel Survey; the results suggest that in the period 2000-2005, at least one-third of the observed health inequalities in the UK were inequalities of opportunity. Copyright © 2013 John Wiley & Sons, Ltd.
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole Eiler; Hansen, P.R.; Lunde, Asger
2008-01-01
This paper shows how to use realized kernels to carry out efficient feasible inference on the ex post variation of underlying equity prices in the presence of simple models of market frictions. The weights can be chosen to achieve the best possible rate of convergence and to have an asymptotic variance which equals that of the maximum likelihood estimator in the parametric version of this problem. Realized kernels can also be selected to (i) be analyzed using endogenously spaced data such as that in data bases on transactions, (ii) allow for market frictions which are endogenous, and (iii) allow for temporally dependent noise. The finite sample performance of our estimators is studied using simulation, while empirical work illustrates their use in practice.
Directory of Open Access Journals (Sweden)
S. Ceccherini
2010-03-01
Full Text Available The variance-covariance matrix (VCM and the averaging kernel matrix (AKM are widely used tools to characterize atmospheric vertical profiles retrieved from remote sensing measurements. Accurate estimation of these quantities is essential both for the evaluation of the quality of the retrieved profiles and for the correct use of the profiles themselves in subsequent applications such as data comparison, data assimilation and data fusion. We propose a new method to estimate the VCM and AKM of vertical profiles retrieved using the Levenberg-Marquardt iterative technique. We apply the new method to the inversion of simulated limb emission measurements. Then we compare the obtained VCM and AKM with those resulting from other methods already published in the literature and with accurate estimates derived using statistical and numerical estimators. The proposed method accounts for all the iterations done in the inversion and provides the most accurate VCM and AKM. Furthermore, it correctly estimates the VCM and the AKM even if the retrieval iterations are stopped when a physically meaningful convergence criterion is fulfilled, i.e. before achievement of the numerical convergence at machine precision. The method can be easily implemented in any Levenberg-Marquardt iterative retrieval scheme, either constrained or unconstrained, without significant computational overhead.
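For a single regularised Gauss-Newton step with a linear forward model, the VCM and AKM have simple closed forms; a minimal sketch (the Jacobian, noise covariance and Tikhonov-type regularisation below are assumptions, and the paper's point is precisely that the multi-iteration Levenberg-Marquardt case needs a more careful estimate than this):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 30, 8                      # measurements, retrieved profile levels
K = rng.normal(size=(m, n))       # Jacobian of the (assumed linear) forward model
Sy = 0.04 * np.eye(m)             # measurement noise covariance
R = 0.1 * np.eye(n)               # Tikhonov-type regularisation

# Gain matrix of one regularised least-squares step.
G = np.linalg.solve(K.T @ np.linalg.inv(Sy) @ K + R, K.T @ np.linalg.inv(Sy))

A = G @ K                         # averaging kernel matrix (AKM)
S = G @ Sy @ G.T                  # noise part of the variance-covariance matrix (VCM)

# Without regularisation A would be the identity; with it, the mean
# diagonal of A (a "degrees of freedom" measure) drops below one.
print(np.round(np.trace(A) / n, 3))
```

The eigenvalues of A lie in (0, 1) whenever the regularisation term is positive definite, which is why trace(A)/n quantifies how much of the retrieval comes from the measurement rather than the constraint.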
Directory of Open Access Journals (Sweden)
Javier Andrés Gómez-Díaz
2016-04-01
Full Text Available Purchase decision-making through the internet was retrospectively reviewed (ex post facto) with a sample of 340 people who had (n = 187) and who had not (n = 153) purchased online. The questionnaire used includes statements for each of the stages involved in the choice (problem identification, information search, alternatives evaluation, and purchase behavior). Some scales were designed for the study while others were adapted from the available research literature. Results show that unplanned purchases are more common through the internet, and that the information available on the network usually has a significant value in online decision-making. Online purchasers and non-purchasers differ in risk perception. Some recommendations for designing web pages for commercial use are suggested, and a discussion of the evolution of online shopping in Colombia is presented.
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole Eiler; Hansen, Peter Reinhard; Lunde, Asger
This paper shows how to use realised kernels to carry out efficient feasible inference on the ex-post variation of underlying equity prices in the presence of simple models of market frictions. The issue is subtle, with only estimators that have symmetric weights delivering consistent estimators with mixed Gaussian limit theorems. The weights can be chosen to achieve the best possible rate of convergence and to have an asymptotic variance which is close to that of the maximum likelihood estimator in the parametric version of this problem. Realised kernels can also be selected to (i) be analysed using endogenously spaced data such as that in databases on transactions, (ii) allow for market frictions which are endogenous, (iii) allow for temporally dependent noise. The finite sample performance of our estimators is studied using simulation, while empirical work illustrates their use in practice.
Exploring the relationship between credit policies and the ex-post performance of Spanish banks.
Directory of Open Access Journals (Sweden)
Francisco Jaime Ibáñez Hernández
2008-05-01
Full Text Available This paper analyzes the relationship between bank credit policies and their ex-post performance. The literature review reveals that credit markets can at times be affected by a stronger endogenous component than is usually assumed. We propose that the growth speed of a bank's credit portfolio in expansive cycles is related to its ex-post performance once the recession begins. The results reflect a strong relation between the speed of credit expansion and poorer subsequent behaviour of profits, returns and insolvencies.
Kisil, Vladimir V.
2010-01-01
The paper develops the theory of the covariant transform, which is inspired by the wavelet construction. It was observed that many interesting types of wavelets (or coherent states) arise from group representations which are not square integrable, or from vacuum vectors which are not admissible. The covariant transform extends the applicability of the popular wavelet construction to classic examples like the Hardy space H_2, Banach spaces, the covariant functional calculus and many others. Keywords: Wavelets, cohe...
Covariance matrices of experimental data
International Nuclear Information System (INIS)
Perey, F.G.
1978-01-01
A complete statement of the uncertainties in data is given by its covariance matrix. It is shown how the covariance matrix of data can be generated using the information available to obtain their standard deviations. Determination of resonance energies by the time-of-flight method is used as an example. The procedure for combining data when the covariance matrix is non-diagonal is given. The method is illustrated by means of examples taken from the recent literature, to obtain an estimate of the energy of the first resonance in carbon and for five resonances of 238U.
Contributions to Large Covariance and Inverse Covariance Matrices Estimation
Kang, Xiaoning
2016-01-01
Estimation of the covariance matrix and its inverse is of great importance in multivariate statistics, with broad applications such as dimension reduction, portfolio optimization, linear discriminant analysis and gene expression analysis. However, accurate estimation of covariance or inverse covariance matrices is challenging due to the positive-definiteness constraint and the large number of parameters, especially in high-dimensional cases. In this thesis, I develop several approaches for estimat...
Westhoek, H.; van der Berg, R.; de Hoop, D.W.; van der Kamp, A.
2004-01-01
This paper summarises the results of both an ex-post evaluation of the Dutch Mineral Accounting System (MINAS) and an ex-ante evaluation of the effect of different levy-free surplus values. The MINAS system was introduced in 1998 in order to reduce nitrate and phosphate leaching from
International Nuclear Information System (INIS)
Sadeghi, Mahmood; Kalantar, Mohsen
2014-01-01
Highlights: • Defining a DG dynamic planning problem. • Applying an evolutionary algorithm called CMA-ES in the planning process. • Considering stochastic electricity-price and fuel-price variation. • Scenario generation and reduction with MCS and backward-reduction programs. • Considering approximately all of the costs of the distribution system. - Abstract: This paper presents a dynamic DG planning problem considering uncertainties related to the intermittent nature of DG technologies such as wind turbines and solar units, in addition to stochastic economic conditions. The stochastic economic situation includes the uncertainties related to the fuel and electricity prices of each year. Monte Carlo simulation is used to generate the possible scenarios of uncertain situations, and the produced scenarios are reduced through a backward-reduction program. The aim of this paper is to maximize the revenue of the distribution system through benefit-cost analysis alongside encouragement and penalty functions. In order to be closer to reality, different growth rates for the planning period are selected. In this paper the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is introduced and used to find the best planning scheme for the DG units. Different DG types are considered in the planning problem. The main assumption of this paper is that the DISCO is the owner of the distribution system and the DG units. The proposed method is tested on a 9-bus test distribution system, and the results are compared with well-known genetic algorithm and PSO methods to show the applicability of CMA-ES to this problem.
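The core mechanism of CMA-ES, adapting the sampling covariance toward directions of past successful steps, can be sketched in a toy form. This version keeps only the rank-mu covariance update (no evolution paths or step-size control from the full algorithm) and is tested on a simple sphere function rather than a DG planning objective:

```python
import numpy as np

def simple_cma(f, x0, sigma=0.5, iters=60, lam=12, seed=1):
    """Toy covariance-matrix-adaptation evolution strategy.

    Keeps only the rank-mu covariance update of CMA-ES; the full
    algorithm also adapts the step size and uses evolution paths.
    """
    rng = np.random.default_rng(seed)
    n = len(x0)
    m, C = np.array(x0, dtype=float), np.eye(n)
    mu = lam // 2
    w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    w /= w.sum()                                  # recombination weights
    for _ in range(iters):
        A = np.linalg.cholesky(C)
        y = rng.standard_normal((lam, n)) @ A.T   # lam samples from N(0, C)
        x = m + sigma * y
        best = np.argsort([f(xi) for xi in x])[:mu]
        yb = y[best]
        m = m + sigma * (w @ yb)                  # move mean toward weighted best
        C = 0.8 * C + 0.2 * sum(wi * np.outer(yi, yi) for wi, yi in zip(w, yb))
    return m

sphere = lambda v: float(np.dot(v, v))            # stand-in objective
xopt = simple_cma(sphere, [2.0, -1.5, 1.0])
print(sphere(xopt))                               # far below the starting value of 7.25
```

The convex 0.8/0.2 mixing in the covariance update keeps C positive definite, so the Cholesky factorization never fails.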
The Bayesian Covariance Lasso.
Khondker, Zakaria S; Zhu, Hongtu; Chu, Haitao; Lin, Weili; Ibrahim, Joseph G
2013-04-01
Estimation of sparse covariance matrices and their inverses subject to positive-definiteness constraints has drawn a lot of attention in recent years. The abundance of high-dimensional data, where the sample size (n) is less than the dimension (d), requires shrinkage estimation methods, since the maximum likelihood estimator is not positive definite in this case. Furthermore, when n is larger than d but not sufficiently larger, shrinkage estimation is more stable than maximum likelihood as it reduces the condition number of the precision matrix. Frequentist methods have utilized penalized likelihood methods, whereas Bayesian approaches rely on matrix decompositions or Wishart priors for shrinkage. In this paper we propose a new method, called the Bayesian Covariance Lasso (BCLASSO), for the shrinkage estimation of a precision (inverse covariance) matrix. We consider a class of priors for the precision matrix that leads to the popular frequentist penalties as special cases, develop a Bayes estimator for the precision matrix, and propose an efficient sampling scheme that does not precalculate boundaries for positive definiteness. The proposed method is permutation invariant and performs shrinkage and estimation simultaneously for non-full-rank data. Simulations show that the proposed BCLASSO performs similarly to frequentist methods for non-full-rank data.
Directory of Open Access Journals (Sweden)
MK Hasan
2014-06-01
The study estimated the benefits of and rates of return to investment in turmeric research and development in Bangladesh. The economic surplus model with ex-post analysis was used to determine the returns to investment and their distribution between production and consumption. Several discounting techniques were also used to assess the efficiency of turmeric research. The adoption rate showed an increasing trend over the period. The yield of BARI-developed modern varieties of turmeric was 41 to 73% higher than that of the local variety. Society gained a net benefit of Tk 9333.88 million from the investment in turmeric research and extension. The net present value (NPV) and present value of research cost (PVRC) were estimated at Tk 1200.84 and 157.88, respectively. The internal rate of return (IRR) and benefit-cost ratio (BCR) were estimated at 68% and 10.45, respectively, indicating that investment in turmeric research and development was profitable. A large-scale turmeric seed production programme should be undertaken to increase production by expanding the area under adoption.
Aydin, Alev Dilek; Caliskan Cavdar, Seyma
2015-01-01
The ANN method has been applied by means of multilayered feedforward neural networks (MLFNs), using different macroeconomic variables such as the USD/TRY exchange rate, gold prices, and the Borsa Istanbul (BIST) 100 index, based on monthly data over the period January 2000 to September 2014 for Turkey. The vector autoregressive (VAR) method has also been applied with the same variables for the same period. In this study, unlike previous studies, the ENCOG machine learning framework has been used along with the JAVA programming language to construct the ANN. The training of the network has been done by the resilient propagation method. The ex-post and ex-ante estimates obtained by the ANN method have been compared with the results obtained by the econometric forecasting method of VAR. Strikingly, our findings based on the ANN method reveal a possibility of financial distress or a financial crisis in Turkey starting from October 2017. The results obtained with the VAR method also support the results of the ANN method. Additionally, our results indicate that the ANN approach has superior prediction performance to the VAR method.
Directory of Open Access Journals (Sweden)
Amir F. N. Abdul-Manan
2015-03-01
Ex-post evaluations of energy policies in Malaysia between 1970 and 2010 were conducted. The development of energy policy in Malaysia is traced from the early 1970s, with the introduction of the country's first energy-related policy, to 2010, with the country's first endeavour towards a biobased energy system. Analyses revealed that many of the policies either (1) responded directly to changes in global or domestic socioeconomic and political events, or (2) provided visions to guide development of the energy sector in alignment with the country's growth agenda. Critical examination of the country's actual energy consumption during these 40 years was also conducted to evaluate the efficacy of these energy-related policies. Three noteworthy successes in Malaysia's energy landscape are: (1) the formation of PETRONAS as the national oil and gas company; (2) the reduction of the country's over-reliance on oil as a single source of energy by significantly growing the production and use of natural gas in a short span of time; and (3) the creation of a thriving oil and gas value chain and ecosystem in the country. However, the country is still critically dependent on scarce petroleum resources, despite having an abundance of renewable reserves. Progress towards renewable energy has been too little and too slow.
Dimension from covariance matrices.
Carroll, T L; Byers, J M
2017-02-01
We describe a method to estimate embedding dimension from a time series. This method includes an estimate of the probability that the dimension estimate is valid. Such validity estimates are not common in algorithms for calculating the properties of dynamical systems. The algorithm described here compares the eigenvalues of covariance matrices created from an embedded signal to the eigenvalues for a covariance matrix of a Gaussian random process with the same dimension and number of points. A statistical test gives the probability that the eigenvalues for the embedded signal did not come from the Gaussian random process.
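The eigenvalue comparison at the heart of this method can be sketched as follows. The embedding dimension, test signal, and thresholds below are illustrative, and only the raw eigenvalue spectra are compared; the paper's statistical test for validity is not implemented here:

```python
import numpy as np

def embed(x, d):
    """Time-delay embedding of a scalar series into dimension d."""
    n = len(x) - d + 1
    return np.column_stack([x[i:i + n] for i in range(d)])

def covariance_eigs(x, d):
    """Sorted (descending) eigenvalues of the embedded-signal covariance."""
    X = embed(x, d)
    X = X - X.mean(axis=0)
    C = (X.T @ X) / len(X)
    return np.sort(np.linalg.eigvalsh(C))[::-1]

rng = np.random.default_rng(3)
t = np.arange(2000) * 0.05
signal = np.sin(t) + 0.5 * np.sin(2.3 * t)   # lies in a 4-dimensional subspace
noise = rng.standard_normal(2000)            # no low-dimensional structure

eig_sig = covariance_eigs(signal, d=8)
eig_noise = covariance_eigs(noise, d=8)
print(eig_sig / eig_sig.sum())               # mass concentrated in ~4 eigenvalues
print(eig_noise / eig_noise.sum())           # roughly flat spectrum
```

Two sinusoids embed into exactly four directions (a sine and cosine component each), so the signal's covariance spectrum collapses onto four eigenvalues, while the Gaussian baseline's spectrum stays nearly uniform.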
Directory of Open Access Journals (Sweden)
Uguru Nkoli P
2010-01-01
Background: The introduction of rapid diagnostic tests (RDTs) has improved the diagnosis and treatment of malaria. However, successful control of malaria will depend on socio-economic factors that influence its management in the community. Willingness to pay (WTP) is important because consumer responses to prices will influence utilization of services and revenues collected. The consumer's attitude can also influence monetary valuation of different conditions ex post and ex ante. Methods: WTP for RDTs for malaria was assessed by the contingent valuation method, using a bidding-game approach, in rural and urban communities in southeast Nigeria. Ex-post WTP was assessed at health centers on 618 patients immediately following diagnosis of malaria with an RDT, and ex-ante WTP was assessed by household interviews of 1020 householders with a prior history of malaria. Results: For ex-ante WTP, 51% of respondents in urban and 24.7% in rural areas were willing to pay for an RDT. The mean WTP in urban areas (235.49 naira) was higher than in rural areas (182.05 naira). For ex-post WTP, 89% and 90.7% of respondents in urban and rural areas, respectively, were willing to pay, and the mean urban WTP (372.30 naira) was again higher than the rural one (296.28 naira). In the ex-post scenario, the lower two socio-economic status (SES) quartiles were more willing to pay, with a higher mean WTP than the higher two SES quartiles, while in the ex-ante scenario the higher two SES quartiles were more willing to pay and had a higher WTP than the lower two. Ex-ante and ex-post WTP were directly dependent on costs. Conclusion: Ex-post WTP is higher than ex-ante WTP, and both are greater than the current cost of RDTs. Urban dwellers were more willing to pay than rural dwellers. The mean WTP should be considered when designing financial strategies for making RDTs available to communities.
International Nuclear Information System (INIS)
Gil, Hugo A.; Gomez-Quiles, Catalina; Riquelme, Jesus
2012-01-01
The integration of large-scale wind power has brought about a series of challenges to the power industry, but at the same time a number of benefits are being realized. Among these, the ability of wind power to cause a decline in electricity market prices has been recognized. In quantifying this effect, some models used in recent years are based on simulations of the market supply side and the price-clearing process. The accuracy of the estimates depends on the quality of the input data, the veracity of the adopted scenarios and the rigorousness of the solution technique. In this work, a series of econometric techniques based on actual ex-post wind power and electricity price data are implemented to estimate the impact of region-wide wind power integration on local electricity market clearing prices and the trading savings that stem from this effect. The model is applied to the case of Spain, where the estimated savings are compared against actual credit and bonus expenses to ratepayers. The implications and extent of these results for current and future renewable energy policy-making are discussed. - Highlights: ► Wholesale electricity market trading benefits from wind power are quantified. ► Actual wind power forecast-based bids and electricity price data from Spain are used. ► Different econometric tools are used and compared for improved estimation accuracy. ► Estimated benefits outweigh the current credit overhead paid to wind farms in Spain. ► An economically efficient benefit surplus allocation framework is proposed.
Modeling Covariance Breakdowns in Multivariate GARCH
Jin, Xin; Maheu, John M
2014-01-01
This paper proposes a flexible way of modeling dynamic heterogeneous covariance breakdowns in multivariate GARCH (MGARCH) models. During periods of normal market activity, volatility dynamics are governed by an MGARCH specification. A covariance breakdown is any significant temporary deviation of the conditional covariance matrix from its implied MGARCH dynamics. This is captured through a flexible stochastic component that allows for changes in the conditional variances, covariances and impl...
International Nuclear Information System (INIS)
Harmelink, Mirjam; Joosen, Suzanne; Blok, Kornelis
2005-01-01
The challenge within ex-post policy evaluation research is to unravel the whole policy process and evaluate the effect and effectiveness of its different steps. Through this unravelling of the policy implementation process, insight is gained into where something went wrong in the process of policy design and implementation and where the keys to improving effectiveness and efficiency lie. This article presents the results of an ex-post evaluation of the effect and effectiveness of the Energy Premium Regulation scheme and the Long-Term Voluntary Agreements to reduce CO2 emissions in the built environment in the Netherlands, applying the theory-based policy evaluation method. The article starts with a description of the theory-based policy evaluation method. The method begins with the formulation of a program theory, which describes the 'ideal' operation of a policy instrument from the viewpoint of the policy makers. The theory is then checked and adapted through interviews with policy makers and executors, and the cause-and-effect chain is finally translated into (quantitative) indicators. The article shows that the theory-based evaluation method has benefits over other ex-post evaluation methods: the whole policy implementation process is evaluated, rather than just the 'end result' (i.e., efficiency improvement and CO2 emission reduction), and through the development of indicators for each step in the implementation process, 'successes and failures' are quantified to the greatest possible extent. By applying this approach we learn not only whether policies are successful, but also why they succeeded or failed and how they can be improved.
International Nuclear Information System (INIS)
Morel, Romain; Shishlov, Igor
2014-05-01
Signed in 1997, following the 1992 United Nations Framework Convention on Climate Change (UNFCCC), the Kyoto Protocol (KP) is the first international instrument for greenhouse gas (GHG) mitigation to involve so many countries: in its final configuration, thirty-six developed countries committed to reduce their emissions by 4% between 1990 and 2008-2012, the first commitment period (CP1). In April 2014, the data from CP1 were officially published. This report thus presents the first comprehensive ex-post analysis of the first period of the KP. In terms of emission reductions, and thus the effectiveness of the agreement, countries party to the protocol globally surpassed their commitment, reducing their emissions by 24%. While positive, this 'over-achievement' appears to be mainly due to the highly criticized 'hot air', i.e. the emission reductions that had already occurred in economies in transition before 1997, equivalent to 18.5% of total base-year emissions. Nevertheless, the other developed countries would have complied even without the 'hot air', as they globally achieved economic growth coupled with declining emissions. This low-carbon growth can be explained by a better primary energy mix, the continued expansion of the service sector, the declining GHG intensity of industry, and the outsourcing of goods production overseas. Despite a low need to use flexibility mechanisms, KP countries actively embraced all of them. Based on the results of this report, four key lessons can be drawn from the Kyoto experience for the establishment of the new global agreement expected to be signed in Paris in 2015: 1. The GHG emission coverage of the KP was insufficient to stop the growth of global GHG emissions; expanding the coverage is thus a priority. The KP included rules tailored to specific sectors' or countries' contexts that helped ensure their participation. In that perspective, it can be strategic to implement specific
Quality Quantification of Evaluated Cross Section Covariances
International Nuclear Information System (INIS)
Varet, S.; Dossantos-Uzarralde, P.; Vayatis, N.
2015-01-01
Presently, several methods are used to estimate the covariance matrix of evaluated nuclear cross sections. Because the resulting covariance matrices can differ according to the method used and to the assumptions of the method, we propose a general and objective approach to quantify the quality of the covariance estimation for evaluated cross sections. The first step consists of defining an objective criterion; the second step is the computation of that criterion. In this paper the Kullback-Leibler distance is proposed for quantifying the quality of a covariance matrix estimate and of its inverse, based on the distance to the true covariance matrix. A bootstrap-based method is presented for the estimation of this criterion, which can be applied with most methods for covariance matrix estimation and without knowledge of the true covariance matrix. The full approach is illustrated on evaluations of the 85Rb nucleus, and the results are then used for a discussion of scoring and Monte Carlo approaches to covariance matrix estimation for cross-section evaluations.
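The Kullback-Leibler distance between two covariance matrices (interpreted as zero-mean Gaussians) has a closed form; the matrices below are small illustrative examples, not nuclear-data evaluations:

```python
import numpy as np

def kl_gaussian(S0, S1):
    """KL divergence D(N(0,S0) || N(0,S1)) between zero-mean Gaussians:
    0.5 * (tr(S1^-1 S0) - d + log det S1 - log det S0)."""
    d = S0.shape[0]
    _, ld0 = np.linalg.slogdet(S0)
    _, ld1 = np.linalg.slogdet(S1)
    return 0.5 * (np.trace(np.linalg.inv(S1) @ S0) - d + ld1 - ld0)

true_cov = np.array([[2.0, 0.3],
                     [0.3, 1.0]])            # reference ("true") covariance
estimate = np.array([[1.8, 0.1],
                     [0.1, 1.1]])            # a hypothetical estimate
print(kl_gaussian(estimate, true_cov))       # strictly positive
print(kl_gaussian(true_cov, true_cov))       # zero: identical distributions
```

The divergence is zero exactly when the two matrices coincide and grows with their mismatch, which is what makes it usable as a quality criterion.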
Flexible Bayesian Dynamic Modeling of Covariance and Correlation Matrices
Lan, Shiwei; Holbrook, Andrew; Fortin, Norbert J.; Ombao, Hernando; Shahbaba, Babak
2017-01-01
Modeling covariance (and correlation) matrices is a challenging problem due to the large dimensionality and positive-definiteness constraint. In this paper, we propose a novel Bayesian framework based on decomposing the covariance matrix
Directory of Open Access Journals (Sweden)
Xiangdong Chen
2017-09-01
The Second-Board Market is a typical stock market for high-tech companies in China. This paper discusses the relationship between trading volume and price changes for high-tech listed companies in the Chinese Second-Board Stock Market. Using the basic concepts proposed by Kim and Verrecchia and by Kandel and Pearson, and contrasting them with ex-post information from earnings releases, the paper provides findings on the speculative behavior of informed traders with a volume-shock premium. The paper suggests that these methods may be further applied to investigating investors' speculative behavior, especially in the high-tech-company-based Second-Board Stock Market during announcement periods.
International Nuclear Information System (INIS)
Harmelink, Mirjam; Harmsen, Robert; Nilsson, Lars
2007-01-01
This article presents the results of an in-depth ex-post analysis of 20 energy efficiency policy instruments applied across different sectors and countries. Within the AID-EE project, we reconstructed and analysed the implementation process of energy efficiency policy instruments with the aim of identifying key factors behind successes and failures. The analysis was performed using a uniform methodology called 'theory-based policy evaluation', in which the whole implementation process is assessed in order to identify: (i) the main hurdles in each step of the implementation process, (ii) key success factors for different types of instruments, and (iii) the key indicators that need to be monitored to enable a sound evaluation of energy efficiency instruments. Our analysis shows that: energy efficiency policies often lack quantitative targets and clear timeframes; policy instruments often have multiple and/or unclear objectives; the need for monitoring information often does not have priority in the design phase; for most instruments, monitoring information is collected on a regular basis, but this information is often insufficient to determine an instrument's impact on energy saving, cost-effectiveness and target achievement; and monitoring and verification of actual energy savings have a relatively low priority for most of the analysed instruments. There is no such thing as the 'best' policy instrument. However, typical circumstances in which to apply different types of instruments, and generic characteristics that determine success or failure, can be identified. Based on the assessments and the experience from applying theory-based policy evaluation ex post, we suggest that it should already be used in the policy formulation and design phase of instruments. We conclude that making policy theory an integral and mandated part of the policy process would facilitate more efficient and effective energy efficiency instruments.
Carreño, M. L. (Martha Liliana)
2006-01-01
The objectives of this thesis are: the ex-ante seismic risk evaluation for urban centers, the evaluation of disaster risk management, and the ex-post risk evaluation of buildings damaged by an earthquake. A complete review of the basic concepts and of the most important recent works in these fields is presented. These aspects are basic for the development of the new ex-ante and ex-post seismic risk evaluation approaches proposed in this thesis, and for the evaluation of the effecti...
Deriving covariant holographic entanglement
Energy Technology Data Exchange (ETDEWEB)
Dong, Xi [School of Natural Sciences, Institute for Advanced Study, Princeton, NJ 08540 (United States); Lewkowycz, Aitor [Jadwin Hall, Princeton University, Princeton, NJ 08544 (United States); Rangamani, Mukund [Center for Quantum Mathematics and Physics (QMAP), Department of Physics, University of California, Davis, CA 95616 (United States)
2016-11-07
We provide a gravitational argument in favour of the covariant holographic entanglement entropy proposal. In general time-dependent states, the proposal asserts that the entanglement entropy of a region in the boundary field theory is given by a quarter of the area of a bulk extremal surface in Planck units. The main element of our discussion is an implementation of an appropriate Schwinger-Keldysh contour to obtain the reduced density matrix (and its powers) of a given region, as is relevant for the replica construction. We map this contour into the bulk gravitational theory, and argue that the saddle point solutions of these replica geometries lead to a consistent prescription for computing the field theory Rényi entropies. In the limiting case where the replica index is taken to unity, a local analysis suffices to show that these saddles lead to the extremal surfaces of interest. We also comment on various properties of holographic entanglement that follow from this construction.
Multivariate covariance generalized linear models
DEFF Research Database (Denmark)
Bonat, W. H.; Jørgensen, Bent
2016-01-01
We propose a general framework for non-normal multivariate data analysis called multivariate covariance generalized linear models, designed to handle multivariate response variables, along with a wide range of temporal and spatial correlation structures defined in terms of a covariance link function combined with a matrix linear predictor involving known matrices. The models are fitted using an efficient Newton scoring algorithm based on quasi-likelihood and Pearson estimating functions, using only second-moment assumptions. This provides a unified approach to a wide variety of types of response variables and covariance structures, including multivariate extensions. The method is motivated by three data examples that are not easily handled by existing methods. The first example concerns multivariate count data, the second involves response variables of mixed types, combined with repeated...
A cautionary note on generalized linear models for covariance of unbalanced longitudinal data
Huang, Jianhua Z.; Chen, Min; Maadooliat, Mehdi; Pourahmadi, Mohsen
2012-01-01
Missing data in longitudinal studies can create enormous challenges in data analysis when coupled with the positive-definiteness constraint on a covariance matrix. For complete balanced data, the Cholesky decomposition of a covariance matrix makes
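The appeal of Cholesky-type decompositions for the positive-definiteness constraint can be sketched minimally: any unconstrained real parameters, mapped through a unit lower-triangular factor and positive diagonal, yield a valid covariance matrix. This mirrors the modified-Cholesky idea in spirit only, not the paper's exact longitudinal formulation:

```python
import numpy as np

def cholesky_to_cov(phi, log_d):
    """Map unconstrained parameters to a covariance matrix.

    phi fills the strict lower triangle of a unit lower-triangular T,
    and log_d gives log innovation variances; Sigma = T D T' is then
    symmetric positive definite for ANY real-valued inputs, which is
    the appeal of Cholesky-type parameterizations for regression-style
    covariance modeling.
    """
    d = len(log_d)
    T = np.eye(d)
    T[np.tril_indices(d, -1)] = phi          # unit lower-triangular factor
    L = T * np.sqrt(np.exp(log_d))           # columns scaled: T @ D^{1/2}
    return L @ L.T

sigma = cholesky_to_cov(phi=[0.5, -0.2, 0.3], log_d=[0.0, -0.5, 0.1])
print(np.linalg.eigvalsh(sigma))             # all strictly positive
```

Because the map is unconstrained, the Cholesky-style entries can be modeled with ordinary regression tools without ever leaving the cone of positive-definite matrices.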
Condition Number Regularized Covariance Estimation.
Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala
2013-06-01
Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p, small n" setting, the estimate of the covariance matrix is required to be not only invertible but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties and can serve as a competitive procedure, especially when the sample size is small and a well-conditioned estimator is required.
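The idea of directly bounding the condition number can be illustrated with a simple eigenvalue-clipping sketch. Note that the paper derives the clipping level from a maximum-likelihood problem, whereas here the cap is fixed by hand:

```python
import numpy as np

def cap_condition_number(S, kappa_max):
    """Clip eigenvalues of a sample covariance so the result has
    condition number at most kappa_max (an ad hoc truncation sketch;
    the paper's estimator chooses the level by maximum likelihood)."""
    w, V = np.linalg.eigh(S)
    u = w.max()
    w_reg = np.clip(w, u / kappa_max, u)     # floor the small eigenvalues
    return (V * w_reg) @ V.T                 # reassemble V diag(w_reg) V'

# "Large p, small n": the sample covariance is singular
rng = np.random.default_rng(7)
X = rng.standard_normal((20, 50))            # n = 20 observations, p = 50
S = np.cov(X, rowvar=False)
S_reg = cap_condition_number(S, kappa_max=30.0)
w = np.linalg.eigvalsh(S_reg)
print(w.max() / w.min())                     # at most 30 by construction
```

The raw sample covariance here has rank at most 19, so its condition number is effectively infinite; flooring the spectrum makes the estimate both invertible and well-conditioned.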
International Nuclear Information System (INIS)
Kawano, Toshihiko; Shibata, Keiichi.
1997-09-01
A covariance evaluation system for the evaluated nuclear data library was established. The parameter estimation method and the least-squares method with a spline function are used to generate the covariance data. Uncertainties of nuclear reaction model parameters are estimated from experimental data uncertainties, and the covariance of the evaluated cross sections is then calculated by means of error propagation. The computer programs ELIESE-3, EGNASH4, ECIS, and CASTHY are used. Covariances of 238U reaction cross sections were calculated with this system. (author)
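The error-propagation step can be sketched with the standard first-order sandwich formula; the Jacobian and parameter uncertainties below are made-up illustrative numbers, not values from any actual evaluation:

```python
import numpy as np

def propagate_covariance(jac, cov_params):
    """First-order error propagation: for y = f(p) with Jacobian
    J = dy/dp, Cov(y) is approximately J Cov(p) J^T."""
    return jac @ cov_params @ jac.T

# Hypothetical sensitivities of two calculated cross sections to two
# model parameters (illustrative numbers only)
J = np.array([[1.2, 0.4],
              [0.3, 2.0]])
cov_p = np.diag([0.05**2, 0.02**2])          # independent parameter errors
cov_y = propagate_covariance(J, cov_p)
print(cov_y)                                 # nonzero off-diagonals: shared
                                             # parameters correlate the results
```

Even with independent parameter uncertainties, the propagated matrix has nonzero off-diagonal entries, which is precisely why evaluated cross sections come with full covariance matrices rather than bare standard deviations.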
On estimating cosmology-dependent covariance matrices
International Nuclear Information System (INIS)
Morrison, Christopher B.; Schneider, Michael D.
2013-01-01
We describe a statistical model to estimate the covariance matrix of matter-tracer two-point correlation functions from cosmological simulations. Assuming a fixed number of cosmological simulation runs, we describe how to build a 'statistical emulator' of the two-point function covariance over a specified range of input cosmological parameters. Because the simulation runs with different cosmological models help to constrain the form of the covariance, we predict that the cosmology-dependent covariance may be estimated with a number of simulations comparable to that needed to estimate the covariance for a fixed cosmology. Our framework is a necessary first step in planning a simulation campaign for analyzing the next generation of cosmological surveys.
A special covariance structure for random coefficient models with both between and within covariates
International Nuclear Information System (INIS)
Riedel, K.S.
1990-07-01
We review random coefficient (RC) models in linear regression and propose a bias correction to the maximum likelihood (ML) estimator. Asymptotic expansions of the ML equations are given when the between-individual variance is much larger or smaller than the variance from within-individual fluctuations. The standard model assumes that all but one covariate vary within each individual (we denote the within covariates by the vector χ_1). We consider random coefficient models where some of the covariates do not vary within any single individual (we denote the between covariates by the vector χ_0). The regression coefficients β_k can only be estimated in the subspace X_k of X. Thus the number of individuals necessary to estimate β and the covariance matrix Δ of β increases significantly in the presence of more than one between covariate. When the number of individuals is sufficient to estimate β but not the entire matrix Δ, additional assumptions must be imposed on the structure of Δ. A simple reduced model is that the between component of β is fixed and only the within component varies randomly. This model fails because it is not invariant under linear coordinate transformations and can significantly overestimate the variance of new observations. We propose a covariance structure for Δ without these difficulties, obtained by first projecting the within covariates onto the space perpendicular to the between covariates. (orig.)
Physical properties of the Schur complement of local covariance matrices
International Nuclear Information System (INIS)
Haruna, L F; Oliveira, M C de
2007-01-01
Properties of global covariance matrices representing bipartite Gaussian states can be decomposed into properties of local covariance matrices and their Schur complements. We demonstrate that, given a bipartite Gaussian state ρ_12 described by a 4 × 4 covariance matrix V, the Schur complement of a local covariance submatrix V_1 of it can be interpreted as a new covariance matrix representing a Gaussian operator of party 1 conditioned on local parity measurements on party 2. The connection with a partial parity measurement over a bipartite quantum state and the determination of the reduced Wigner function is given, and an operational process of parity measurement is developed. A generalization of this procedure to an n-partite Gaussian state is given, and it is demonstrated that the (n - 1)-system state conditioned on a partial parity projection is given by a covariance matrix whose 2 × 2 block elements are Schur complements of special local matrices.
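A minimal numerical sketch of the central object, assuming an illustrative (not physically sourced) 4 × 4 covariance matrix V partitioned into 2 × 2 local blocks; the Schur complement V_1 - C V_2^{-1} C' is the conditioned local covariance discussed above.

```python
import numpy as np

# Hypothetical 4x4 covariance matrix of a two-mode Gaussian state,
# partitioned into 2x2 blocks: V = [[V1, C], [C', V2]].
V = np.array([[2.0, 0.0, 1.0, 0.0],
              [0.0, 2.0, 0.0, -1.0],
              [1.0, 0.0, 2.0, 0.0],
              [0.0, -1.0, 0.0, 2.0]])

V1, C, V2 = V[:2, :2], V[:2, 2:], V[2:, 2:]

# Schur complement of the V2 block: the 2x2 covariance describing
# party 1 after eliminating party 2's block.
schur = V1 - C @ np.linalg.inv(V2) @ C.T
print(schur)
```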
Impact of the 235U Covariance Data in Benchmark Calculations
International Nuclear Information System (INIS)
Leal, Luiz C.; Mueller, D.; Arbanas, G.; Wiarda, D.; Derrien, H.
2008-01-01
The error estimation for calculated quantities relies on nuclear data uncertainty information available in the basic nuclear data libraries such as the U.S. Evaluated Nuclear Data File (ENDF/B). The uncertainty files (covariance matrices) in the ENDF/B library are generally obtained from analysis of experimental data. In the resonance region, the computer code SAMMY is used for analyses of experimental data and generation of resonance parameters. In addition to resonance parameters evaluation, SAMMY also generates resonance parameter covariance matrices (RPCM). SAMMY uses the generalized least-squares formalism (Bayes method) together with the resonance formalism (R-matrix theory) for analysis of experimental data. Two approaches are available for creation of resonance-parameter covariance data. (1) During the data-evaluation process, SAMMY generates both a set of resonance parameters that fit the experimental data and the associated resonance-parameter covariance matrix. (2) For existing resonance-parameter evaluations for which no resonance-parameter covariance data are available, SAMMY can retroactively create a resonance-parameter covariance matrix. The retroactive method was used to generate covariance data for 235U. The resulting 235U covariance matrix was then used as input to the PUFF-IV code, which processed the covariance data into multigroup form, and to the TSUNAMI code, which calculated the uncertainty in the multiplication factor due to uncertainty in the experimental cross sections. The objective of this work is to demonstrate the use of the 235U covariance data in calculations of critical benchmark systems
MATXTST, Basic Operations for Covariance Matrices
International Nuclear Information System (INIS)
Geraldo, Luiz P.; Smith, Donald
1989-01-01
1 - Description of program or function: MATXTST and MATXTST1 perform the following operations for a covariance matrix: - test for singularity; - test for positive definiteness; - compute the inverse if the matrix is non-singular; - compute the determinant; - determine the number of positive, negative, and zero eigenvalues; - examine all possible 3 × 3 cross correlations within a sub-matrix corresponding to a leading principal minor which is non-positive definite. While the two programs utilize the same input, the calculational procedures employed are somewhat different and their functions are complementary. The available input options include: i) the full covariance matrix, ii) the basic variables plus the relative covariance matrix, or iii) uncertainties in the basic variables plus the correlation matrix. 2 - Method of solution: MATXTST employs LINPACK subroutines SPOFA and SPODI to test for positive definiteness and to perform further optional calculations. Subroutine SPOFA factors a symmetric matrix M using the Cholesky algorithm to determine the elements of a matrix R which satisfies the relation M=R'R, where R' is the transpose of R. Each leading principal minor of M is tested until the first one is found which is not positive definite. MATXTST1 uses LINPACK subroutines SSICO, SSIFA, and SSIDI to estimate whether the matrix is near singularity or not (SSICO), and to perform the matrix diagonalization process (SSIFA). The algorithm used in SSIFA is a generalization of the Method of Lagrange Reduction. SSIDI is used to compute the determinant and inertia of the matrix. 3 - Restrictions on the complexity of the problem: Matrices of sizes up to 50 × 50 elements can be treated by the present versions of the programs.
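The operations listed above can be sketched with standard linear algebra routines; this is an illustrative re-implementation in NumPy, not the LINPACK-based MATXTST code itself.

```python
import numpy as np

def matrix_tests(m, tol=1e-12):
    """MATXTST-style checks for a symmetric covariance matrix."""
    results = {}
    eigvals = np.linalg.eigvalsh(m)
    results["singular"] = bool(np.any(np.abs(eigvals) < tol))
    # Positive definiteness via Cholesky factorization (as SPOFA does):
    # it succeeds iff every leading principal minor is positive.
    try:
        np.linalg.cholesky(m)
        results["positive_definite"] = True
    except np.linalg.LinAlgError:
        results["positive_definite"] = False
    results["determinant"] = np.linalg.det(m)
    # Inertia: counts of positive, negative, and (near-)zero eigenvalues.
    results["inertia"] = (int(np.sum(eigvals > tol)),
                          int(np.sum(eigvals < -tol)),
                          int(np.sum(np.abs(eigvals) <= tol)))
    if not results["singular"]:
        results["inverse"] = np.linalg.inv(m)
    return results

cov = np.array([[4.0, 1.0], [1.0, 2.0]])
r = matrix_tests(cov)
print(r["positive_definite"], r["inertia"])  # True (2, 0, 0)
```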
Covariance and sensitivity data generation at ORNL
International Nuclear Information System (INIS)
Leal, L. C.; Derrien, H.; Larson, N. M.; Alpan, A.
2005-01-01
Covariance data are required to assess uncertainties in design parameters in several nuclear applications. The error estimation of calculated quantities relies on the nuclear data uncertainty information available in the basic nuclear data libraries, such as the US Evaluated Nuclear Data Library, ENDF/B. The uncertainty files in the ENDF/B library are obtained from the analysis of experimental data and are stored as variance and covariance data. In this paper we address the generation of covariance data in the resonance region with the computer code SAMMY. SAMMY is used in the evaluation of the experimental data in the resolved and unresolved resonance energy regions. The data fitting of cross sections is based on the generalised least-squares formalism (Bayesian theory) together with the resonance formalism described by R-matrix theory. Two approaches are used in SAMMY for the generation of resonance parameter covariance data. In the evaluation process SAMMY generates a set of resonance parameters that fit the data and provides the resonance parameter covariances. For resonance parameter evaluations where no resonance parameter covariance data are available, the alternative is an approach called 'retroactive' resonance parameter covariance generation. In this paper, we describe the application of the retroactive covariance generation approach for the gadolinium isotopes. (authors)
Székely, Gábor J.; Rizzo, Maria L.
2010-01-01
Distance correlation is a new class of multivariate dependence coefficients applicable to random vectors of arbitrary and not necessarily equal dimension. Distance covariance and distance correlation are analogous to product-moment covariance and correlation, but generalize and extend these classical bivariate measures of dependence. Distance correlation characterizes independence: it is zero if and only if the random vectors are independent. The notion of covariance with...
New perspective in covariance evaluation for nuclear data
International Nuclear Information System (INIS)
Kanda, Y.
1992-01-01
Methods of nuclear data evaluation have been highly developed during the past decade, especially after the introduction of the concept of covariance. This makes the question of how to evaluate covariance matrices for nuclear data one of utmost importance. It can be said that covariance evaluation is just nuclear data evaluation, because the covariance matrix plays a quantitatively decisive role in current evaluation methods. The covariance primarily represents experimental uncertainties. However, the correlation of individual uncertainties between different data must be taken into account, and this cannot be done without detailed physical consideration of the experimental conditions. This procedure depends on the evaluator, and so does the estimated covariance. The mathematical properties of the covariance have been intensively discussed. Its physical properties should be studied in order to apply it to nuclear data evaluation; in this report they are reviewed to give a base for further development of covariance applications. (orig.)
International Nuclear Information System (INIS)
2016-05-01
This report provides a peer review of the study 'Ex-post Investigation of Cost Pass-Through in the EU ETS: an analysis of six sectors', produced by CE Delft and Oeko-Institut and published by the EU Commission in November 2015. The study of CE Delft and Oeko-Institut (2015) investigates carbon cost pass-through across iron and steel, refineries, cement, organic basic chemicals, fertilizer, and glass. It also provides estimates of cost pass-through rates of roughly 100% for petrochemicals, 80-100% for petrol, and above 100% for diesel and gas-oil. Our peer review (on refineries and organic basic chemicals) concludes that the results of the study cannot be used as direct policy recommendations, due to the following three main limitations. First, the robustness of the econometric results is not clearly established: the claims of the study are only supported at a low confidence level (the chosen significance level was 10%, while sound econometric practice would require at least 5%); moreover, detailed analysis shows that the hypothesis of a cost pass-through is significant (at 10%) only for some products and for some EU countries, so the results are heterogeneous and not representative of a conclusion at the EU level for either sector. Second, the quantitative estimates of the cost pass-through rates are obtained through a simple accounting relationship between input costs and output prices; economic practice would rather use an explicit price-formation model allowing for such factors as trade intensity, market structure and concentration, heterogeneity of competitors, etc. Some of these factors are marginally analyzed in the study without clear conclusions; indeed, without any in-depth statistical analysis, the study cannot establish a statistically significant relationship between market shares and cost pass-through rates. Third, other explanatory variables, such as the role of EUA price volatility and the macro-economic environment, should also have been considered.
Bergshoeff, E.; Pope, C.N.; Stelle, K.S.
1990-01-01
We discuss the notion of higher-spin covariance in w∞ gravity. We show how a recently proposed covariant w∞ gravity action can be obtained from non-chiral w∞ gravity by making field redefinitions that introduce new gauge-field components with corresponding new gauge transformations.
The method of covariant symbols in curved space-time
International Nuclear Information System (INIS)
Salcedo, L.L.
2007-01-01
Diagonal matrix elements of pseudodifferential operators are needed in order to compute effective Lagrangians and currents. For this purpose the method of symbols is often used, which however lacks manifest covariance. In this work the method of covariant symbols, introduced by Pletnev and Banin, is extended to curved space-time with arbitrary gauge and coordinate connections. For the Riemannian connection we compute the covariant symbols corresponding to external fields, the covariant derivative and the Laplacian, to fourth order in a covariant derivative expansion. This allows one to obtain the covariant symbol of general operators to the same order. The procedure is illustrated by computing the diagonal matrix element of a nontrivial operator to second order. Applications of the method are discussed. (orig.)
Brier, Matthew R; Mitra, Anish; McCarthy, John E; Ances, Beau M; Snyder, Abraham Z
2015-11-01
Functional connectivity refers to shared signals among brain regions and is typically assessed in a task-free state. Functional connectivity is commonly quantified between signal pairs using Pearson correlation. However, resting-state fMRI is a multivariate process exhibiting a complicated covariance structure. Partial covariance assesses the unique variance shared between two brain regions, excluding any widely shared variance, and hence is appropriate for the analysis of multivariate fMRI datasets. However, calculation of partial covariance requires inversion of the covariance matrix, which, in most functional connectivity studies, is not invertible owing to rank deficiency. Here we apply Ledoit-Wolf shrinkage (L2 regularization) to invert the high-dimensional BOLD covariance matrix. We investigate the network organization and brain-state dependence of partial covariance-based functional connectivity. Although resting-state networks (RSNs) are conventionally defined in terms of shared variance, removal of widely shared variance, surprisingly, improved the separation of RSNs in a spring-embedded graphical model. This result suggests that pair-wise unique shared variance plays a heretofore unrecognized role in RSN covariance organization. In addition, application of partial correlation to fMRI data acquired in the eyes-open vs. eyes-closed states revealed focal changes in uniquely shared variance between the thalamus and visual cortices. This result suggests that partial correlations of resting-state BOLD time series reflect functional processes in addition to structural connectivity.
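A minimal sketch of the approach: a rank-deficient sample covariance (more regions than time points) is made invertible by shrinkage toward a scaled identity, after which partial correlations are read off the precision matrix. The fixed shrinkage weight below is an illustrative assumption, not the analytically optimal Ledoit-Wolf weight used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Rank-deficient case: more "regions" (p) than time points (n),
# mimicking the high-dimensional BOLD setting described above.
n, p = 50, 80
X = rng.standard_normal((n, p))
S = np.cov(X, rowvar=False)          # singular: rank <= n - 1 < p

# L2 (ridge-type) shrinkage toward a scaled identity target.
alpha = 0.1                          # illustrative weight
target = np.trace(S) / p * np.eye(p)
S_shrunk = (1 - alpha) * S + alpha * target

precision = np.linalg.inv(S_shrunk)  # now invertible

# Partial correlations from the precision matrix.
d = np.sqrt(np.diag(precision))
partial_corr = -precision / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)
```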
COVARIANCE ASSISTED SCREENING AND ESTIMATION.
Ke, By Tracy; Jin, Jiashun; Fan, Jianqing
2014-11-01
Consider a linear model Y = Xβ + z, where X = X_{n,p} and z ~ N(0, I_n). The vector β is unknown and it is of interest to separate its nonzero coordinates from the zero ones (i.e., variable selection). Motivated by examples in long-memory time series (Fan and Yao, 2003) and the change-point problem (Bhattacharya, 1994), we are primarily interested in the case where the Gram matrix G = X'X is non-sparse but sparsifiable by a finite-order linear filter. We focus on the regime where signals are both rare and weak, so that successful variable selection is very challenging but still possible. We approach this problem by a new procedure called Covariance Assisted Screening and Estimation (CASE). CASE first uses linear filtering to reduce the original setting to a new regression model where the corresponding Gram (covariance) matrix is sparse. The new covariance matrix induces a sparse graph, which guides us to conduct multivariate screening without visiting all the submodels. By interacting with the signal sparsity, the graph enables us to decompose the original problem into many separated small-size subproblems (if only we knew where they were!). Linear filtering also induces a so-called problem of information leakage, which can be overcome by the newly introduced patching technique. Together, these give rise to CASE, which is a two-stage Screen and Clean (Fan and Song, 2010; Wasserman and Roeder, 2009) procedure, where we first identify candidates for these submodels by patching and screening, and then re-examine each candidate to remove false positives. For any variable selection procedure β̂, we measure the performance by the minimax Hamming distance between the sign vectors of β̂ and β. We show that in a broad class of situations where the Gram matrix is non-sparse but sparsifiable, CASE achieves the optimal rate of convergence. The results are successfully applied to long-memory time series and the change-point model.
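The notion of a Gram matrix that is 'non-sparse but sparsifiable by a finite-order linear filter' can be illustrated with an AR(1)-type example (an assumption for illustration, simpler than the long-memory settings studied in the paper): the dense Toeplitz matrix G_ij = rho^|i-j| becomes exactly diagonal after the two-tap filter (1, -rho).

```python
import numpy as np

# Dense Toeplitz "Gram" matrix of an AR(1)-correlated design.
n, rho = 8, 0.7
idx = np.arange(n)
G = rho ** np.abs(idx[:, None] - idx[None, :])

# Filtering matrix D: row t applies the filter x_{t+1} - rho * x_t.
D = np.eye(n - 1, n, k=1) - rho * np.eye(n - 1, n)
G_filt = D @ G @ D.T

# All off-diagonal entries vanish: the filtered Gram matrix is diagonal,
# with diagonal entries 1 - rho^2.
print(np.max(np.abs(G_filt - np.diag(np.diag(G_filt)))) < 1e-12)  # True
```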
Covariant representations of nuclear *-algebras
International Nuclear Information System (INIS)
Moore, S.M.
1978-01-01
Extensions of the C*-algebra theory for covariant representations to nuclear *-algebras are considered. Irreducible covariant representations are essentially unique, an invariant state produces a covariant representation with a stable vacuum, and the usual relation between ergodic states and covariant representations holds. There exist construction and decomposition theorems, and a possible relation between derivations and covariant representations.
A scale invariant covariance structure on jet space
DEFF Research Database (Denmark)
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2005-01-01
This paper considers scale invariance of statistical image models. We study statistical scale invariance of the covariance structure of jet space under scale space blurring and derive the necessary structure and conditions of the jet covariance matrix in order for it to be scale invariant. As par...
A three domain covariance framework for EEG/MEG data
Ros, B.P.; Bijma, F.; de Gunst, M.C.M.; de Munck, J.C.
2015-01-01
In this paper we introduce a covariance framework for the analysis of single subject EEG and MEG data that takes into account observed temporal stationarity on small time scales and trial-to-trial variations. We formulate a model for the covariance matrix, which is a Kronecker product of three
Covariant Noncommutative Field Theory
Energy Technology Data Exchange (ETDEWEB)
Estrada-Jimenez, S [Licenciaturas en Fisica y en Matematicas, Facultad de Ingenieria, Universidad Autonoma de Chiapas Calle 4a Ote. Nte. 1428, Tuxtla Gutierrez, Chiapas (Mexico); Garcia-Compean, H [Departamento de Fisica, Centro de Investigacion y de Estudios Avanzados del IPN P.O. Box 14-740, 07000 Mexico D.F., Mexico and Centro de Investigacion y de Estudios Avanzados del IPN, Unidad Monterrey Via del Conocimiento 201, Parque de Investigacion e Innovacion Tecnologica (PIIT) Autopista nueva al Aeropuerto km 9.5, Lote 1, Manzana 29, cp. 66600 Apodaca Nuevo Leon (Mexico); Obregon, O [Instituto de Fisica de la Universidad de Guanajuato P.O. Box E-143, 37150 Leon Gto. (Mexico); Ramirez, C [Facultad de Ciencias Fisico Matematicas, Universidad Autonoma de Puebla, P.O. Box 1364, 72000 Puebla (Mexico)
2008-07-02
The covariant approach to noncommutative field and gauge theories is revisited. In the process the formalism is applied to field theories invariant under diffeomorphisms. Local differentiable forms are defined in this context. The lagrangian and hamiltonian formalism is consistently introduced.
Covariant Noncommutative Field Theory
International Nuclear Information System (INIS)
Estrada-Jimenez, S.; Garcia-Compean, H.; Obregon, O.; Ramirez, C.
2008-01-01
The covariant approach to noncommutative field and gauge theories is revisited. In the process the formalism is applied to field theories invariant under diffeomorphisms. Local differentiable forms are defined in this context. The lagrangian and hamiltonian formalism is consistently introduced
Progress on Nuclear Data Covariances: AFCI-1.2 Covariance Library
International Nuclear Information System (INIS)
Oblozinsky, P.; Mattoon, C.M.; Herman, M.; Mughabghab, S.F.; Pigni, M.T.; Talou, P.; Hale, G.M.; Kahler, A.C.; Kawano, T.; Little, R.C.; Young, P.G
2009-01-01
Improved neutron cross section covariances were produced for 110 materials including 12 light nuclei (coolants and moderators), 78 structural materials and fission products, and 20 actinides. The improved covariances were organized into the AFCI-1.2 covariance library in 33 energy groups, from 10^-5 eV to 19.6 MeV. BNL contributed improved covariance data for the following materials: 23Na and 55Mn, where more detailed evaluations were done; improvements in the major structural materials 52Cr, 56Fe and 58Ni; improved estimates for the remaining structural materials and fission products; improved covariances for 14 minor actinides; and estimates of mubar covariances for 23Na and 56Fe. LANL contributed improved covariance data for 235U and 239Pu, including prompt neutron fission spectra, and a completely new evaluation for 240Pu. A new R-matrix evaluation for 16O, including mubar covariances, is under completion. BNL assembled the library and performed basic testing using improved procedures, including inspection of uncertainty and correlation plots for each material. The AFCI-1.2 library was released to ANL and INL in August 2009.
International Nuclear Information System (INIS)
Yanagisawa, Kazuaki
2007-09-01
From a viewpoint of ex-post evaluation, research papers published by nine comparable nuclear research institutes located in Japan, the U.S.A., Germany and France were compared by a bibliometric method. The research database used was the Energy Citation Database (ECD) owned by USDOE. ECD is run by USDOE and has a high rate of research paper acquisition in the U.S. Response speed of ECD on the website is quick and all logged data can be handled easily. The INIS database is run by the International Atomic Energy Agency (IAEA) and contains many nuclear research papers collected from member countries such as the U.S.A., Japan, Germany and France. INIS covers about 20% fewer U.S. papers than ECD. I. Institutional comparison. (1) ECD shows that the total number of research papers published during 25 years (1978-2002) was of the order ORNL (34,149 papers) > SNL > ANL > BNL > Idaho (> Karlsruhe > JAERI > Juelich > Cadarache), whereas INIS shows ORNL > JAERI. (2) ECD supports long-term comparisons with a time span of more than 50 years (1953-2002). Disclosed research papers were of the order ORNL (55,857) > ANL (37,129) > SNL (24,628) > BNL (24,829) > Idaho (2,398). Many records, over 50,000, were loaded without publication dates; because of this, any searches that use dates do not find these documents. Typically, the author found over 5,000 SNL items in the NSA range of records. SNL also kept many defense reports that are not yet disclosed. One had better know the historical background of each site when making long-range date comparisons. (3) ECD shows that the number of research papers varied across five-year periods. In the period 1988-1992, paper reduction occurred sharply at most U.S. institutes. This might be attributed to lay-offs, funding shifts or complete elimination of programs, or a policy change in reporting requirements for contract deliverables. Definitions of what constituted STI (science
Covariance data processing code. ERRORJ
International Nuclear Information System (INIS)
Kosako, Kazuaki
2001-01-01
The covariance data processing code, ERRORJ, was developed to process the covariance data of JENDL-3.2. ERRORJ has the processing functions of covariance data for cross sections including resonance parameters, angular distribution and energy distribution. (author)
Covariant n^2-plet mass formulas
International Nuclear Information System (INIS)
Davidson, A.
1979-01-01
Using a generalized internal symmetry group analogous to the Lorentz group, we have constructed a covariant n^2-plet mass operator. This operator is built as a scalar matrix in the (n; n*) representation, and its SU(n) breaking parameters are identified as intrinsic boost ones. Its basic properties are: covariance, Hermiticity, positivity, charge conjugation, quark contents, and a self-consistent (n^2 - 1, 1) mixing. The GMO and the Okubo formulas are obtained by considering two different limits of the same generalized mass formula.
Parametric number covariance in quantum chaotic spectra.
Vinayak; Kumar, Sandeep; Pandey, Akhilesh
2016-03-01
We study spectral parametric correlations in quantum chaotic systems and introduce the number covariance as a measure of such correlations. We derive analytic results for the classical random matrix ensembles using the binary correlation method and obtain compact expressions for the covariance. We illustrate the universality of this measure by presenting the spectral analysis of the quantum kicked rotors for the time-reversal invariant and time-reversal noninvariant cases. A local version of the parametric number variance introduced earlier is also investigated.
Massive data compression for parameter-dependent covariance matrices
Heavens, Alan F.; Sellentin, Elena; de Mijolla, Damien; Vianello, Alvise
2017-12-01
We show how the massive data compression algorithm MOPED can be used to reduce, by orders of magnitude, the number of simulated data sets required to estimate the covariance matrix needed for the analysis of Gaussian-distributed data. This is relevant when the covariance matrix cannot be calculated directly. The compression is especially valuable when the covariance matrix varies with the model parameters. In this case, it may be prohibitively expensive to run enough simulations to estimate the full covariance matrix throughout the parameter space. This compression may be particularly valuable for the next generation of weak lensing surveys, such as those proposed for Euclid and the Large Synoptic Survey Telescope, for which the number of summary data (such as band power or shear correlation estimates) is very large, ~10^4, due to the large number of tomographic redshift bins into which the data will be divided. In the pessimistic case where the covariance matrix is estimated separately for all points in a Monte Carlo Markov Chain analysis, this may require an unfeasible 10^9 simulations. We show here that MOPED can reduce this number by a factor of 1000, or a factor of ~10^6 if some regularity in the covariance matrix is assumed, reducing the number of simulations required to a manageable 10^3, making an otherwise intractable analysis feasible.
Covariance Estimation and Autocorrelation of NORAD Two-Line Element Sets
National Research Council Canada - National Science Library
Osweiler, Victor P
2006-01-01
This thesis investigates NORAD two-line element sets (TLE) containing satellite mean orbital elements for the purpose of estimating a covariance matrix and formulating an autocorrelation relationship...
Covariate analysis of bivariate survival data
Energy Technology Data Exchange (ETDEWEB)
Bennett, L.E.
1992-01-01
The methods developed are used to analyze the effects of covariates on bivariate survival data when censoring and ties are present. The proposed method provides models for bivariate survival data that include differential covariate effects and censored observations. The proposed models are based on an extension of the univariate Buckley-James estimators which replace censored data points by their expected values, conditional on the censoring time and the covariates. For the bivariate situation, it is necessary to determine the expectation of the failure times for one component conditional on the failure or censoring time of the other component. Two different methods have been developed to estimate these expectations. In the semiparametric approach these expectations are determined from a modification of Burke's estimate of the bivariate empirical survival function. In the parametric approach censored data points are also replaced by their conditional expected values where the expected values are determined from a specified parametric distribution. The model estimation will be based on the revised data set, comprised of uncensored components and expected values for the censored components. The variance-covariance matrix for the estimated covariate parameters has also been derived for both the semiparametric and parametric methods. Data from the Demographic and Health Survey was analyzed by these methods. The two outcome variables are post-partum amenorrhea and breastfeeding; education and parity were used as the covariates. Both the covariate parameter estimates and the variance-covariance estimates for the semiparametric and parametric models will be compared. In addition, a multivariate test statistic was used in the semiparametric model to examine contrasts. The significance of the statistic was determined from a bootstrap distribution of the test statistic.
Asset allocation with different covariance/correlation estimators
Μανταφούνη, Σοφία
2007-01-01
The subject of the study is to test whether the use of different covariance – correlation estimators than the historical covariance matrix that is widely used, would help in portfolio optimization through the mean-variance analysis. In other words, if an investor would like to use the mean-variance analysis in order to invest in assets like stocks or indices, would it be of some help to use more sophisticated estimators for the covariance matrix of the returns of his portfolio? The procedure ...
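A hedged sketch of the kind of comparison described: global minimum-variance weights w ∝ Σ^{-1}·1 computed under two covariance estimators. The returns data and the crude shrinkage estimator below are illustrative assumptions, not the estimators examined in the thesis.

```python
import numpy as np

def min_variance_weights(cov):
    """Global minimum-variance portfolio: w proportional to inv(cov) @ 1."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

rng = np.random.default_rng(1)
returns = rng.standard_normal((250, 4)) * 0.01   # toy daily returns

# Estimator 1: the historical (sample) covariance matrix.
sample_cov = np.cov(returns, rowvar=False)
# Estimator 2: a crude shrinkage toward the diagonal (illustrative only).
shrunk = 0.8 * sample_cov + 0.2 * np.diag(np.diag(sample_cov))

w1 = min_variance_weights(sample_cov)
w2 = min_variance_weights(shrunk)
```

By construction, w1 minimizes portfolio variance under the sample covariance among all fully invested portfolios, so its in-sample variance is no larger than that of equal weighting.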
Covariance expressions for eigenvalue and eigenvector problems
Liounis, Andrew J.
There are a number of important scientific and engineering problems whose solutions take the form of an eigenvalue-eigenvector problem. Some notable examples include solutions to linear systems of ordinary differential equations, controllability of linear systems, finite element analysis, chemical kinetics, fitting ellipses to noisy data, and optimal estimation of attitude from unit vectors. In many of these problems, having knowledge of the eigenvalue and eigenvector Jacobians is either necessary or nearly as important as having the solution itself. For instance, Jacobians are necessary to find the uncertainty in a computed eigenvalue or eigenvector estimate. This uncertainty, which is usually represented as a covariance matrix, has been well studied for problems similar to the eigenvalue and eigenvector problem, such as singular value decomposition. There has been substantially less research on the covariance of an optimal estimate originating from an eigenvalue-eigenvector problem. In this thesis we develop two general expressions for the Jacobians of eigenvalues and eigenvectors with respect to the elements of their parent matrix. The expressions developed make use of only the parent matrix and the eigenvalue and eigenvector pair under consideration. In addition, they are applicable to any general matrix (including complex-valued matrices, eigenvalues, and eigenvectors) as long as the eigenvalues are simple. Alongside this, we develop expressions that determine the uncertainty in a vector estimate obtained from an eigenvalue-eigenvector problem given the uncertainty in the terms of the matrix. The Jacobian expressions developed are numerically validated with forward finite differencing, and the covariance expressions are validated using Monte Carlo analysis. Finally, the results from this work are used to determine covariance expressions for a variety of estimation problem examples and are also applied to the design of a dynamical system.
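For the symmetric case the eigenvalue Jacobian has a particularly simple closed form, d(lambda_k)/dA_ij = v_{k,i} v_{k,j} for a unit eigenvector v_k of a simple eigenvalue, and it can be checked by forward finite differencing in the spirit of the validation described above. The matrix below is an illustrative assumption; the thesis's own expressions cover general (complex, non-symmetric) matrices.

```python
import numpy as np

# Symmetric matrix with simple (distinct) eigenvalues.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
w, V = np.linalg.eigh(A)
k = 2                      # track the largest eigenvalue
v = V[:, k]
jac = np.outer(v, v)       # analytic Jacobian d(lambda_k)/dA_ij = v_i v_j

# Forward finite differencing on a diagonal entry (keeps A symmetric).
eps = 1e-6
A_pert = A.copy()
A_pert[0, 0] += eps
w_pert = np.linalg.eigvalsh(A_pert)
fd = (w_pert[k] - w[k]) / eps
print(abs(fd - jac[0, 0]) < 1e-5)  # True
```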
International Nuclear Information System (INIS)
Broc, J.S.
2006-12-01
Energy end-use efficiency (EE) is a priority for energy policies, to cope with resource exhaustion and to reduce pollutant emissions. At the same time, in France, the local level is increasingly involved in the implementation of EE activities, whose framework is changing (energy market liberalization, new policy instruments). Needs for ex-post evaluation of local EE activities are thus increasing, both for regulation requirements and to support a necessary change of scale. Our thesis focuses on the original issue of the ex-post evaluation of local EE operations in France. The state of the art, through an analysis of American and European experiences and of the reference guidebooks, provides substantial methodological material and emphasizes the key evaluation issues. Concurrently, local EE operations in France are characterized by an analysis of their environment and by work on their segmentation criteria. The combination of these criteria with the key evaluation issues provides an analysis framework used as the basis for the composition of evaluation methods. This also highlights the specific evaluation needs of local operations. A methodology is then developed to complete and adapt the existing material in order to design evaluation methods for local operations that stakeholders can easily appropriate. Evaluation results thus feed a know-how building process with experience feedback. These methods are to meet two main goals: to determine the operation results, and to detect the success/failure factors. The methodology was validated on concrete cases, where these objectives were reached. (author)
Pozsgay, Victor; Hirsch, Flavien; Branciard, Cyril; Brunner, Nicolas
2017-12-01
We introduce Bell inequalities based on covariance, one of the most common measures of correlation. Explicit examples are discussed, and violations in quantum theory are demonstrated. A crucial feature of these covariance Bell inequalities is their nonlinearity; this has nontrivial consequences for the derivation of their local bound, which is not reached by deterministic local correlations. For our simplest inequality, we derive analytically tight bounds for both local and quantum correlations. An interesting application of covariance Bell inequalities is that they can act as "shared randomness witnesses": specifically, the value of the Bell expression gives device-independent lower bounds on both the dimension and the entropy of the shared random variable in a local model.
ENDF-6 File 30: Data covariances obtained from parameter covariances and sensitivities
International Nuclear Information System (INIS)
Muir, D.W.
1989-01-01
File 30 is provided as a means of describing the covariances of tabulated cross sections, multiplicities, and energy-angle distributions that result from propagating the covariances of a set of underlying parameters (for example, the input parameters of a nuclear-model code), using an evaluator-supplied set of parameter covariances and sensitivities. Whenever nuclear data are evaluated primarily through the application of nuclear models, the covariances of the resulting data can be described very adequately, and compactly, by specifying the covariance matrix for the underlying nuclear parameters, along with a set of sensitivity coefficients giving the rate of change of each nuclear datum of interest with respect to each of the model parameters. Although motivated primarily by these applications of nuclear theory, use of File 30 is not restricted to any one particular evaluation methodology. It can be used to describe data covariances of any origin, so long as they can be formally separated into a set of parameters with specified covariances and a set of data sensitivities.
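The propagation described above can be sketched as a "sandwich rule": the data covariance follows from the parameter covariance and the sensitivity matrix, Cov_data = S Cov_param S^T. The numbers below are hypothetical, not actual ENDF data.

```python
import numpy as np

def propagate_covariance(cov_param, sensitivities):
    """cov_param: (p, p) parameter covariance; sensitivities: (n, p) matrix
    of d(datum_i)/d(param_j). Returns the (n, n) data covariance."""
    S = np.asarray(sensitivities, dtype=float)
    return S @ np.asarray(cov_param, dtype=float) @ S.T

# Two model parameters with 5% and 10% relative uncertainty, uncorrelated.
cov_param = np.diag([0.05**2, 0.10**2])
S = np.array([[1.0, 0.5],
              [0.2, 1.0]])           # hypothetical sensitivity coefficients
cov_data = propagate_covariance(cov_param, S)
```

Even when the parameters are uncorrelated, the shared sensitivities generate off-diagonal data covariances, which is exactly the compact description File 30 encodes.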
How much do genetic covariances alter the rate of adaptation?
Agrawal, Aneil F; Stinchcombe, John R
2009-03-22
Genetically correlated traits do not evolve independently, and the covariances between traits affect the rate at which a population adapts to a specified selection regime. To measure the impact of genetic covariances on the rate of adaptation, we compare the rate at which fitness increases given the observed G matrix to the expected rate if all the covariances in the G matrix are set to zero. Using data from the literature, we estimate the effect of genetic covariances in real populations. We find no net tendency for covariances to constrain the rate of adaptation, though the quality and heterogeneity of the data limit the certainty of this result. There are some examples in which covariances strongly constrain the rate of adaptation, but these are balanced by counterexamples in which covariances facilitate the rate of adaptation; in many cases, covariances have little or no effect. We also discuss how our metric can be used to identify traits or suites of traits whose genetic covariances to other traits have a particularly large impact on the rate of adaptation.
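A toy two-trait version of this comparison (hypothetical G matrix and selection gradient, not data from the study): under the multivariate breeder's equation the response is Δz̄ = Gβ, so the rate of adaptation is proportional to βᵀGβ, and zeroing the off-diagonal covariances shows whether they constrain or facilitate the response.

```python
import numpy as np

G = np.array([[1.0, -0.6],
              [-0.6, 1.0]])          # hypothetical genetic (co)variance matrix
beta = np.array([1.0, 1.0])          # selection gradient on the two traits

rate_full = beta @ G @ beta                      # with covariances
rate_nocov = beta @ np.diag(np.diag(G)) @ beta   # covariances set to zero
constraint_ratio = rate_full / rate_nocov        # < 1: covariances constrain
```

Here the negative covariance between two traits under concordant selection gives a ratio of 0.4, i.e., a strong constraint; a positive covariance would instead facilitate adaptation (ratio > 1).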
Energy Technology Data Exchange (ETDEWEB)
Bourget, Antoine; Troost, Jan [Laboratoire de Physique Théorique, École Normale Supérieure, 24 rue Lhomond, 75005 Paris (France)
2016-03-23
We construct a covariant generating function for the spectrum of chiral primaries of symmetric orbifold conformal field theories with N=(4,4) supersymmetry in two dimensions. For seed target spaces K3 and T^4, the generating functions capture the SO(21) and SO(5) representation theoretic content of the chiral ring respectively. Via string dualities, we relate the transformation properties of the chiral ring under these isometries of the moduli space to the Lorentz covariance of perturbative string partition functions in flat space.
Impact of the 235U covariance data in benchmark calculations
International Nuclear Information System (INIS)
Leal, Luiz; Mueller, Don; Arbanas, Goran; Wiarda, Dorothea; Derrien, Herve
2008-01-01
The error estimation for calculated quantities relies on nuclear data uncertainty information available in the basic nuclear data libraries such as the U.S. Evaluated Nuclear Data File (ENDF/B). The uncertainty files (covariance matrices) in the ENDF/B library are generally obtained from analysis of experimental data. In the resonance region, the computer code SAMMY is used for analyses of experimental data and generation of resonance parameters. In addition to resonance parameters evaluation, SAMMY also generates resonance parameter covariance matrices (RPCM). SAMMY uses the generalized least-squares formalism (Bayes' method) together with the resonance formalism (R-matrix theory) for analysis of experimental data. Two approaches are available for creation of resonance-parameter covariance data. (1) During the data-evaluation process, SAMMY generates both a set of resonance parameters that fit the experimental data and the associated resonance-parameter covariance matrix. (2) For existing resonance-parameter evaluations for which no resonance-parameter covariance data are available, SAMMY can retroactively create a resonance-parameter covariance matrix. The retroactive method was used to generate covariance data for 235U. The resulting 235U covariance matrix was then used as input to the PUFF-IV code, which processed the covariance data into multigroup form, and to the TSUNAMI code, which calculated the uncertainty in the multiplication factor due to uncertainty in the experimental cross sections. The objective of this work is to demonstrate the use of the 235U covariance data in calculations of critical benchmark systems. (authors)
Large Covariance Estimation by Thresholding Principal Orthogonal Complements
Fan, Jianqing; Liao, Yuan; Mincheva, Martina
2012-01-01
This paper deals with the estimation of a high-dimensional covariance with a conditional sparsity structure and fast-diverging eigenvalues. By assuming sparse error covariance matrix in an approximate factor model, we allow for the presence of some cross-sectional correlation even after taking out common but unobservable factors. We introduce the Principal Orthogonal complEment Thresholding (POET) method to explore such an approximate factor structure with sparsity. The POET estimator includes the sample covariance matrix, the factor-based covariance matrix (Fan, Fan, and Lv, 2008), the thresholding estimator (Bickel and Levina, 2008) and the adaptive thresholding estimator (Cai and Liu, 2011) as specific examples. We provide mathematical insights when the factor analysis is approximately the same as the principal component analysis for high-dimensional data. The rates of convergence of the sparse residual covariance matrix and the conditional sparse covariance matrix are studied under various norms. It is shown that the impact of estimating the unknown factors vanishes as the dimensionality increases. The uniform rates of convergence for the unobserved factors and their factor loadings are derived. The asymptotic results are also verified by extensive simulation studies. Finally, a real data application on portfolio allocation is presented. PMID:24348088
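A minimal sketch of a POET-style estimator as described above, assuming the number of factors K is known and using a plain soft threshold on the principal orthogonal complement (the paper's adaptive thresholds are more refined):

```python
import numpy as np

def poet(X, K, tau):
    """POET sketch: top-K principal components of the sample covariance,
    plus a soft-thresholded residual ('principal orthogonal complement')."""
    S = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(S)              # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:K]            # top-K components
    low_rank = (vecs[:, idx] * vals[idx]) @ vecs[:, idx].T
    R = S - low_rank                            # principal orthogonal complement
    R_thr = np.sign(R) * np.maximum(np.abs(R) - tau, 0.0)  # soft threshold
    np.fill_diagonal(R_thr, np.diag(R))         # diagonal is never thresholded
    return low_rank + R_thr

rng = np.random.default_rng(0)
F = rng.standard_normal((200, 2))               # 2 latent factors
L = rng.standard_normal((2, 10))                # factor loadings
X = F @ L + 0.1 * rng.standard_normal((200, 10))
Sigma_hat = poet(X, K=2, tau=0.05)
```

With tau = 0 the estimator reduces to the sample covariance matrix, matching the abstract's remark that the sample covariance is a special case of POET.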
On covariance structure in noisy, big data
Paffenroth, Randy C.; Nong, Ryan; Du Toit, Philip C.
2013-09-01
Herein we describe theory and algorithms for detecting covariance structures in large, noisy data sets. Our work uses ideas from matrix completion and robust principal component analysis to detect the presence of low-rank covariance matrices, even when the data is noisy, distorted by large corruptions, and only partially observed. In fact, the ability to handle partial observations combined with ideas from randomized algorithms for matrix decomposition enables us to produce asymptotically fast algorithms. Herein we will provide numerical demonstrations of the methods and their convergence properties. While such methods have applicability to many problems, including mathematical finance, crime analysis, and other large-scale sensor fusion problems, our inspiration arises from applying these methods in the context of cyber network intrusion detection.
Generalized Linear Covariance Analysis
Carpenter, James R.; Markley, F. Landis
2014-01-01
This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.
Cross-covariance functions for multivariate geostatistics
Genton, Marc G.; Kleiber, William
2015-01-01
Continuously indexed datasets with multiple variables have become ubiquitous in the geophysical, ecological, environmental and climate sciences, and pose substantial analysis challenges to scientists and statisticians. For many years, scientists developed models that aimed at capturing the spatial behavior for an individual process; only within the last few decades has it become commonplace to model multiple processes jointly. The key difficulty is in specifying the cross-covariance function, that is, the function responsible for the relationship between distinct variables. Indeed, these cross-covariance functions must be chosen to be consistent with marginal covariance functions in such a way that the second-order structure always yields a nonnegative definite covariance matrix. We review the main approaches to building cross-covariance models, including the linear model of coregionalization, convolution methods, the multivariate Matérn and nonstationary and space-time extensions of these among others. We additionally cover specialized constructions, including those designed for asymmetry, compact support and spherical domains, with a review of physics-constrained models. We illustrate select models on a bivariate regional climate model output example for temperature and pressure, along with a bivariate minimum and maximum temperature observational dataset; we compare models by likelihood value as well as via cross-validation co-kriging studies. The article closes with a discussion of unsolved problems. © Institute of Mathematical Statistics, 2015.
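The nonnegative-definiteness requirement can be sketched with the linear model of coregionalization (LMC) mentioned above: summing PSD coregionalization matrices times valid univariate correlation functions guarantees a valid joint covariance over any set of sites. Sites, matrices, and ranges below are hypothetical, on a 1-D transect for simplicity.

```python
import numpy as np

def lmc_covariance(sites, B, ranges):
    """Bivariate LMC on a 1-D transect: sum of Kronecker products of PSD
    2x2 coregionalization matrices with exponential correlation matrices."""
    d = np.abs(sites[:, None] - sites[None, :])   # pairwise distances
    return sum(np.kron(Bk, np.exp(-d / rk)) for Bk, rk in zip(B, ranges))

sites = np.linspace(0.0, 1.0, 5)
B = [np.array([[1.0, 0.6], [0.6, 1.0]]),          # PSD coregionalization mats
     np.array([[0.5, 0.1], [0.1, 0.3]])]
C = lmc_covariance(sites, B, ranges=[0.3, 1.0])   # (2*5) x (2*5) covariance
eigs = np.linalg.eigvalsh(C)                      # all nonnegative by construction
```

Because each Kronecker factor is PSD, the sum is PSD, which is the consistency condition the abstract emphasizes for cross-covariance models.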
Fast covariance estimation for innovations computed from a spatial Gibbs point process
DEFF Research Database (Denmark)
Coeurjolly, Jean-Francois; Rubak, Ege
In this paper, we derive an exact formula for the covariance of two innovations computed from a spatial Gibbs point process and suggest a fast method for estimating this covariance. We show how this methodology can be used to estimate the asymptotic covariance matrix of the maximum pseudo...
Covariant field equations in supergravity
Energy Technology Data Exchange (ETDEWEB)
Vanhecke, Bram [KU Leuven, Institute for Theoretical Physics, Leuven (Belgium); Ghent University, Faculty of Physics, Gent (Belgium); Proeyen, Antoine van [KU Leuven, Institute for Theoretical Physics, Leuven (Belgium)
2017-12-15
Covariance is a useful property for handling supergravity theories. In this paper, we prove a covariance property of supergravity field equations: under reasonable conditions, field equations of supergravity are covariant modulo other field equations. We prove that for any supergravity there exist such covariant equations of motion, other than the regular equations of motion, that are equivalent to the latter. The relations that we find between field equations and their covariant form can be used to obtain multiplets of field equations. In practice, the covariant field equations are easily found by simply covariantizing the ordinary field equations. (copyright 2017 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
Generally covariant gauge theories
International Nuclear Information System (INIS)
Capovilla, R.
1992-01-01
A new class of generally covariant gauge theories in four space-time dimensions is investigated. The field variables are taken to be a Lie algebra valued connection 1-form and a scalar density. Modulo an important degeneracy, complex [euclidean] vacuum general relativity corresponds to a special case in this class. A canonical analysis of the generally covariant gauge theories with the same gauge group as general relativity shows that they describe two degrees of freedom per space point, qualifying therefore as a new set of neighbors of general relativity. The modification of the algebra of the constraints with respect to the general relativity case is computed; this is used in addressing the question of how general relativity stands out from its neighbors. (orig.)
A cautionary note on generalized linear models for covariance of unbalanced longitudinal data
Huang, Jianhua Z.
2012-03-01
Missing data in longitudinal studies can create enormous challenges in data analysis when coupled with the positive-definiteness constraint on a covariance matrix. For complete balanced data, the Cholesky decomposition of a covariance matrix makes it possible to remove the positive-definiteness constraint and use a generalized linear model setup to jointly model the mean and covariance using covariates (Pourahmadi, 2000). However, this approach may not be directly applicable when the longitudinal data are unbalanced, as coherent regression models for the dependence across all times and subjects may not exist. Within the existing generalized linear model framework, we show how to overcome this and other challenges by embedding the covariance matrix of the observed data for each subject in a larger covariance matrix and employing the familiar EM algorithm to compute the maximum likelihood estimates of the parameters and their standard errors. We illustrate and assess the methodology using real data sets and simulations. © 2011 Elsevier B.V.
Ros, B.P.; Bijma, F.; de Munck, J.C.; de Gunst, M.C.M.
2016-01-01
This paper deals with multivariate Gaussian models for which the covariance matrix is a Kronecker product of two matrices. We consider maximum likelihood estimation of the model parameters, in particular of the covariance matrix. There is no explicit expression for the maximum likelihood estimator
Lorentz Covariance of Langevin Equation
International Nuclear Information System (INIS)
Koide, T.; Denicol, G.S.; Kodama, T.
2008-01-01
Relativistic covariance of a Langevin type equation is discussed. The requirement of Lorentz invariance generates an entanglement between the force and noise terms so that the noise itself should not be a covariant quantity. (author)
Video based object representation and classification using multiple covariance matrices.
Zhang, Yurong; Liu, Quan
2017-01-01
Video based object recognition and classification has been widely studied in computer vision and image processing. One main issue of this task is to develop an effective representation for video. This problem can generally be formulated as image set representation. In this paper, we present a new method called Multiple Covariance Discriminative Learning (MCDL) for the image set representation and classification problem. The core idea of MCDL is to represent an image set using multiple covariance matrices, with each covariance matrix representing one cluster of images. First, we use the Nonnegative Matrix Factorization (NMF) method to cluster the images within each image set, and then adopt Covariance Discriminative Learning on each cluster (subset) of images. Finally, we adopt KLDA and a nearest-neighbor classifier for image set classification. Promising experimental results on several datasets show the effectiveness of our MCDL method.
Information matrix estimation procedures for cognitive diagnostic models.
Liu, Yanlou; Xin, Tao; Andersson, Björn; Tian, Wei
2018-03-06
Two new methods to estimate the asymptotic covariance matrix for marginal maximum likelihood estimation of cognitive diagnosis models (CDMs), the inverse of the observed information matrix and the sandwich-type estimator, are introduced. Unlike several previous covariance matrix estimators, the new methods take into account both the item and structural parameters. The relationships between the observed information matrix, the empirical cross-product information matrix, the sandwich-type covariance matrix and the two approaches proposed by de la Torre (2009, J. Educ. Behav. Stat., 34, 115) are discussed. Simulation results show that, for a correctly specified CDM and Q-matrix or with a slightly misspecified probability model, the observed information matrix and the sandwich-type covariance matrix exhibit good performance with respect to providing consistent standard errors of item parameter estimates. However, with substantial model misspecification only the sandwich-type covariance matrix exhibits robust performance. © 2018 The British Psychological Society.
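The sandwich structure A⁻¹BA⁻¹ referred to above can be illustrated outside the CDM setting, e.g., for ordinary least squares with heteroskedastic errors (a standard instance of the same construction; the data and names here are illustrative, not from the paper):

```python
import numpy as np

def sandwich_ols(X, y):
    """Sandwich-type covariance for OLS: A^{-1} B A^{-1} with
    A = X'X ('bread') and B = X' diag(e_i^2) X ('meat')."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ beta                       # residuals (scores are X_i * e_i)
    A = X.T @ X                            # observed-information analogue
    B = (X * (e**2)[:, None]).T @ X        # cross-product of scores
    A_inv = np.linalg.inv(A)
    return beta, A_inv @ B @ A_inv

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = 1.0 + 2.0 * X[:, 1] + rng.standard_normal(n) * (1 + np.abs(X[:, 1]))
beta_hat, V = sandwich_ols(X, y)           # robust standard errors: sqrt(diag(V))
```

The point mirrored in the simulation results above: when the probability model is misspecified (here, non-constant error variance), the sandwich form stays consistent while the plain inverse-information form does not.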
Distance covariance for stochastic processes
DEFF Research Database (Denmark)
Matsui, Muneya; Mikosch, Thomas Valentin; Samorodnitsky, Gennady
2017-01-01
The distance covariance of two random vectors is a measure of their dependence. The empirical distance covariance and correlation can be used as statistical tools for testing whether two random vectors are independent. We propose an analog of the distance covariance for two stochastic processes...
Covariance fitting of highly-correlated data in lattice QCD
Yoon, Boram; Jang, Yong-Chull; Jung, Chulwoo; Lee, Weonjong
2013-07-01
We address a frequently asked question on the covariance fitting of highly correlated data such as our B_K data based on the SU(2) staggered chiral perturbation theory. Basically, the essence of the problem is that we do not have a fitting function accurate enough to fit extremely precise data. When eigenvalues of the covariance matrix are small, even a tiny error in the fitting function yields a large chi-square value and spoils the fitting procedure. We have applied a number of prescriptions available in the market, such as the cut-off method, the modified covariance matrix method, and the Bayesian method. We also propose a brand new method, the eigenmode shift (ES) method, which allows a full covariance fitting without modifying the covariance matrix at all. We provide a pedagogical example of data analysis in which the cut-off method manifestly fails in fitting, but the rest work well. In our case of the B_K fitting, the diagonal approximation, the cut-off method, the ES method, and the Bayesian method all work reasonably well in an engineering sense. However, the meaning of χ² is easier to interpret, in a theoretical sense, for the ES method and the Bayesian method. Hence, the ES method can be a useful alternative tool for checking the systematic error caused by the covariance fitting procedure.
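A toy illustration of the small-eigenvalue problem and the cut-off prescription mentioned above (hypothetical residuals and covariance, not the B_K data): the correlated χ² = rᵀC⁻¹r explodes when C has a near-singular eigenmode, and the cut-off method simply drops such modes.

```python
import numpy as np

def chi2_cutoff(r, C, eps):
    """Correlated chi-square with eigenmodes of C below eps*max discarded."""
    vals, vecs = np.linalg.eigh(C)
    keep = vals > eps * vals.max()
    z = vecs.T @ r                        # residuals rotated to the eigenbasis
    return float(np.sum(z[keep]**2 / vals[keep]))

r = np.array([1.0, 1.0])                  # tiny model error in every direction
C = np.array([[1.0, 0.0],
              [0.0, 1e-12]])              # one nearly singular eigenmode
chi2_full = chi2_cutoff(r, C, 0.0)        # keeps both modes: blows up
chi2_cut = chi2_cutoff(r, C, 1e-6)        # drops the tiny mode: well-behaved
```

The full fit assigns an enormous χ² to a harmless residual, which is the failure mode the prescriptions in the abstract (cut-off, modified covariance, ES, Bayesian) are designed to tame.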
More on Estimation of Banded and Banded Toeplitz Covariance Matrices
Berntsson, Fredrik; Ohlson, Martin
2017-01-01
In this paper we consider two different linear covariance structures, banded and banded Toeplitz, and how to estimate them using different methods, e.g., by minimizing different norms. One way to estimate the parameters in a linear covariance structure is to use tapering, which has been shown to be the solution to a universal least squares problem. We know that tapering does not always guarantee the positive-definiteness constraint on the estimated covariance matrix and may not be a suitable me...
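A hard-banding sketch of the tapering idea (a 0/1 taper on the sample covariance; smooth tapers replace the indicator with decaying weights). As the abstract notes, the result need not be positive definite, which is the limitation the paper addresses.

```python
import numpy as np

def band(S, k):
    """Keep only entries of S within bandwidth k of the diagonal."""
    i, j = np.indices(S.shape)
    return np.where(np.abs(i - j) <= k, S, 0.0)

S = np.ones((4, 4))     # stand-in for a sample covariance matrix
Sb = band(S, 1)         # tri-banded estimate
```

Usage: `band(np.cov(X, rowvar=False), k)` for data `X`, choosing the bandwidth `k` by cross-validation or from the assumed covariance structure.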
Earth Observing System Covariance Realism
Zaidi, Waqar H.; Hejduk, Matthew D.
2016-01-01
The purpose of covariance realism is to properly size a primary object's covariance in order to add validity to the calculation of the probability of collision. The covariance realism technique in this paper consists of three parts: collection/calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics. An empirical cumulative distribution function (ECDF) Goodness-of-Fit (GOF) method is employed to determine if a covariance is properly sized by comparing the empirical distribution of Mahalanobis distance calculations to the hypothesized parent 3-DoF chi-squared distribution. To realistically size a covariance for collision probability calculations, this study uses a state noise compensation algorithm that adds process noise to the definitive epoch covariance to account for uncertainty in the force model. Process noise is added until the GOF tests pass a group significance level threshold. The results of this study indicate that when outliers attributed to persistently high or extreme levels of solar activity are removed, the aforementioned covariance realism compensation method produces a tuned covariance with up to 80 to 90% of the covariance propagation timespan passing the GOF tests (against a 60% minimum passing threshold), a quite satisfactory and useful result.
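The Mahalanobis-distance check at the heart of this technique can be sketched as follows (synthetic 3-D errors drawn from a hypothetical covariance P, not flight data): for a correctly sized covariance, the squared distances follow a 3-DoF chi-squared distribution, so their mean should be close to 3.

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[4.0, 1.0, 0.0],
              [1.0, 2.0, 0.5],
              [0.0, 0.5, 1.0]])       # hypothetical 3x3 position covariance
L = np.linalg.cholesky(P)
errors = (L @ rng.standard_normal((3, 5000))).T   # errors drawn from N(0, P)

P_inv = np.linalg.inv(P)
m2 = np.einsum('ij,jk,ik->i', errors, P_inv, errors)  # squared Mahalanobis
# For a properly sized covariance, m2 ~ chi-squared(3), so mean(m2) ≈ 3.
```

A GOF test then compares the empirical distribution of `m2` against the chi-squared(3) parent; an undersized covariance inflates the distances, an oversized one deflates them.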
Covariant holography of a tachyonic accelerating universe
Energy Technology Data Exchange (ETDEWEB)
Rozas-Fernandez, Alberto [Consejo Superior de Investigaciones Cientificas, Instituto de Fisica Fundamental, Madrid (Spain); University of Portsmouth, Institute of Cosmology and Gravitation, Portsmouth (United Kingdom)
2014-08-15
We apply the holographic principle to a flat dark energy dominated Friedmann-Robertson-Walker spacetime filled with a tachyon scalar field with constant equation of state w = p/ρ, both for w > -1 and w < -1. By using a geometrical covariant procedure, which allows the construction of holographic hypersurfaces, we have obtained for each case the position of the preferred screen and have then compared these with those obtained by using the holographic dark energy model with the future event horizon as the infrared cutoff. In the phantom scenario, one of the two obtained holographic screens is placed on the big rip hypersurface, both for the covariant holographic formalism and the holographic phantom model. It is also analyzed whether the existence of these preferred screens allows a mathematically consistent formulation of fundamental theories based on the existence of an S-matrix at infinite distances. (orig.)
On spectral distribution of high dimensional covariation matrices
DEFF Research Database (Denmark)
Heinrich, Claudio; Podolskij, Mark
In this paper we present the asymptotic theory for spectral distributions of high dimensional covariation matrices of Brownian diffusions. More specifically, we consider N-dimensional Itô integrals with time varying matrix-valued integrands. We observe n equidistant high frequency data points ... of the underlying Brownian diffusion and we assume that N/n -> c in (0,oo). We show that under a certain mixed spectral moment condition the spectral distribution of the empirical covariation matrix converges in distribution almost surely. Our proof relies on the method of moments and applications of graph theory.
International Nuclear Information System (INIS)
Ginelli, Francesco; Politi, Antonio; Chaté, Hugues; Livi, Roberto
2013-01-01
Recent years have witnessed a growing interest in covariant Lyapunov vectors (CLVs) which span local intrinsic directions in the phase space of chaotic systems. Here, we review the basic results of ergodic theory, with a specific reference to the implications of Oseledets’ theorem for the properties of the CLVs. We then present a detailed description of a ‘dynamical’ algorithm to compute the CLVs and show that it generically converges exponentially in time. We also discuss its numerical performance and compare it with other algorithms presented in the literature. We finally illustrate how CLVs can be used to quantify deviations from hyperbolicity with reference to a dissipative system (a chain of Hénon maps) and a Hamiltonian model (a Fermi–Pasta–Ulam chain). This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical devoted to ‘Lyapunov analysis: from dynamical systems theory to applications’. (paper)
Networks of myelin covariance.
Melie-Garcia, Lester; Slater, David; Ruef, Anne; Sanabria-Diaz, Gretel; Preisig, Martin; Kherif, Ferath; Draganski, Bogdan; Lutti, Antoine
2018-04-01
Networks of anatomical covariance have been widely used to study connectivity patterns in both normal and pathological brains based on the concurrent changes of morphometric measures (i.e., cortical thickness) between brain structures across subjects (Evans, ). However, the existence of networks of microstructural changes within brain tissue has been largely unexplored so far. In this article, we studied in vivo the concurrent myelination processes among brain anatomical structures that gathered together emerge to form nonrandom networks. We name these "networks of myelin covariance" (Myelin-Nets). The Myelin-Nets were built from quantitative Magnetization Transfer data-an in-vivo magnetic resonance imaging (MRI) marker of myelin content. The synchronicity of the variations in myelin content between anatomical regions was measured by computing the Pearson's correlation coefficient. We were especially interested in elucidating the effect of age on the topological organization of the Myelin-Nets. We therefore selected two age groups: Young-Age (20-31 years old) and Old-Age (60-71 years old) and a pool of participants from 48 to 87 years old for a Myelin-Nets aging trajectory study. We found that the topological organization of the Myelin-Nets is strongly shaped by aging processes. The global myelin correlation strength, between homologous regions and locally in different brain lobes, showed a significant dependence on age. Interestingly, we also showed that the aging process modulates the resilience of the Myelin-Nets to damage of principal network structures. In summary, this work sheds light on the organizational principles driving myelination and myelin degeneration in brain gray matter and how such patterns are modulated by aging. © 2017 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.
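A bare-bones sketch of building such a covariance network from per-region measures across subjects (random stand-in data, not MT or MRI values): correlate regions across subjects, then threshold the correlation matrix into an adjacency matrix whose nodes are regions and whose edges mark strong covariance.

```python
import numpy as np

rng = np.random.default_rng(1)
subjects, regions = 40, 6
mt = rng.standard_normal((subjects, regions))   # stand-in for per-region myelin values

R = np.corrcoef(mt, rowvar=False)               # region-by-region Pearson r
adj = (np.abs(R) > 0.3) & ~np.eye(regions, dtype=bool)  # thresholded network
degree = adj.sum(axis=1)                        # a simple graph metric per region
```

Group comparisons (e.g., Young-Age vs Old-Age) then amount to building `R` separately per group and comparing correlation strengths or graph metrics between the resulting networks.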
Non-evaluation applications for covariance matrices
Energy Technology Data Exchange (ETDEWEB)
Smith, D.L.
1982-05-01
The possibility for application of covariance matrix techniques to a variety of common research problems other than formal data evaluation is demonstrated by means of several examples. These examples deal with such matters as fitting spectral data, deriving uncertainty estimates for results calculated from experimental data, obtaining the best values for plurally-measured quantities, and methods for analysis of cross section errors based on properties of the experiment. The examples deal with realistic situations encountered in the laboratory, and they are treated in sufficient detail to enable a careful reader to extrapolate the methods to related problems.
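One of the applications mentioned, deriving uncertainty estimates for results calculated from experimental data, rests on the standard first-order "sandwich" rule cov_f ≈ J C Jᵀ. A minimal sketch of that rule (generic, not code from the report):

```python
import numpy as np

def propagate_uncertainty(f, x, cov_x, eps=1e-6):
    """Propagate a covariance matrix through f via the first-order rule
    cov_f ≈ J cov_x J^T, with the Jacobian J estimated by finite differences."""
    x = np.asarray(x, dtype=float)
    f0 = np.atleast_1d(f(x))
    J = np.zeros((f0.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (np.atleast_1d(f(x + dx)) - f0) / eps
    return J @ cov_x @ J.T

# Example: variance of a ratio r = a/b of two correlated measured quantities
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])
var_r = propagate_uncertainty(lambda v: v[0] / v[1], [2.0, 4.0], cov)[0, 0]
```

For the ratio example the analytic result is J = [1/b, -a/b²], giving JCJᵀ = 0.00328125.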
General Galilei Covariant Gaussian Maps
Gasbarri, Giulio; Toroš, Marko; Bassi, Angelo
2017-09-01
We characterize general non-Markovian Gaussian maps which are covariant under Galilean transformations. In particular, we consider translational and Galilean covariant maps and show that they reduce to the known Holevo result in the Markovian limit. We apply the results to discuss measures of macroscopicity based on classicalization maps, specifically addressing dissipation, Galilean covariance and non-Markovianity. We further suggest a possible generalization of the macroscopicity measure defined by Nimmrichter and Hornberger [Phys. Rev. Lett. 110, 16 (2013)].
Fast Computing for Distance Covariance
Huo, Xiaoming; Szekely, Gabor J.
2014-01-01
Distance covariance and distance correlation have been widely adopted in measuring dependence of a pair of random variables or random vectors. If the computation of distance covariance and distance correlation is implemented directly according to its definition, then its computational complexity is O($n^2$), which is a disadvantage compared to other faster methods. In this paper we show that the computation of distance covariance and distance correlation of real-valued random variables can be...
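The direct O($n^2$) definitional computation that serves as the baseline here can be written out as follows (a sketch of the standard sample formula with double-centred distance matrices, not the authors' fast algorithm):

```python
import numpy as np

def distance_covariance(x, y):
    """Direct O(n^2) sample distance covariance of two real-valued samples,
    following the definitional formula (the slow baseline, not the fast method)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])   # pairwise distance matrix for x
    b = np.abs(y[:, None] - y[None, :])
    # double-centre: subtract row and column means, add back the grand mean
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    return np.sqrt(max((A * B).mean(), 0.0))

def distance_correlation(x, y):
    dxy = distance_covariance(x, y)
    denom = np.sqrt(distance_covariance(x, x) * distance_covariance(y, y))
    return dxy / denom if denom > 0 else 0.0
```

Distance correlation is 1 for an exact affine relationship and 0 only under independence (in the population version), which is what makes it attractive as a dependence measure.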
A three domain covariance framework for EEG/MEG data.
Roś, Beata P; Bijma, Fetsje; de Gunst, Mathisca C M; de Munck, Jan C
2015-10-01
In this paper we introduce a covariance framework for the analysis of single subject EEG and MEG data that takes into account observed temporal stationarity on small time scales and trial-to-trial variations. We formulate a model for the covariance matrix, which is a Kronecker product of three components that correspond to space, time and epochs/trials, and consider maximum likelihood estimation of the unknown parameter values. An iterative algorithm that finds approximations of the maximum likelihood estimates is proposed. Our covariance model is applicable in a variety of cases where spontaneous EEG or MEG acts as a source of noise and realistic noise covariance estimates are needed, such as in evoked activity studies, or where the properties of spontaneous EEG or MEG are themselves the topic of interest, like in combined EEG-fMRI experiments in which the correlation between EEG and fMRI signals is investigated. We use a simulation study to assess the performance of the estimator and investigate the influence of different assumptions about the covariance factors on the estimated covariance matrix and on its components. We apply our method to real EEG and MEG data sets. Copyright © 2015 Elsevier Inc. All rights reserved.
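The three-domain Kronecker structure of the model can be illustrated as below (the AR(1)-style components are hypothetical stand-ins, not the estimated factors from the paper):

```python
import numpy as np

def kronecker_covariance(space_cov, time_cov, trial_cov):
    """Full covariance as a Kronecker product of spatial, temporal and
    trial components, mirroring the three-domain structure of the model."""
    return np.kron(trial_cov, np.kron(time_cov, space_cov))

def ar1_cov(n, rho):
    """Simple AR(1)-style correlation matrix, used here as a stand-in component."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

# 3 channels x 4 time points x 2 trials -> a 24 x 24 covariance matrix
cov = kronecker_covariance(ar1_cov(3, 0.5), ar1_cov(4, 0.7), np.eye(2))
```

The Kronecker factorization is what makes estimation tractable: only the small component matrices need to be estimated, not the full 24 × 24 (or, in practice, far larger) matrix.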
Bayesian source term determination with unknown covariance of measurements
Belal, Alkomiet; Tichý, Ondřej; Šmídl, Václav
2017-04-01
Determination of a source term of release of a hazardous material into the atmosphere is a very important task for emergency response. We are concerned with the problem of estimation of the source term in the conventional linear inverse problem, $y = Mx$, where the relationship between the vector of observations $y$ and the unknown source term $x$ is described by the source-receptor-sensitivity (SRS) matrix $M$. Since the system is typically ill-conditioned, the problem is recast as the optimization problem $\min_x \, (y - Mx)^T R^{-1} (y - Mx) + x^T B^{-1} x$. The first term minimizes the error of the measurements with covariance matrix $R$, and the second term is a regularization of the source term. Different types of regularization arise for different choices of the matrices $R$ and $B$; for example, Tikhonov regularization takes the covariance matrix $B$ to be the identity matrix multiplied by a scalar parameter. In this contribution, we adopt a Bayesian approach to make inference on the unknown source term $x$ as well as the unknown $R$ and $B$. We assume the prior on $x$ to be Gaussian with zero mean and unknown diagonal covariance matrix $B$. The covariance matrix of the likelihood, $R$, is also unknown. We consider two potential choices of the structure of the matrix $R$: the first is a diagonal matrix, and the second is a locally correlated structure using information on the topology of the measuring network. Since the inference of the model is intractable, an iterative variational Bayes algorithm is used for simultaneous estimation of all model parameters. The practical usefulness of our contribution is demonstrated on an application of the resulting algorithm to real data from the European Tracer Experiment (ETEX). This research is supported by EEA/Norwegian Financial Mechanism under project MSMT-28477/2014 Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
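For fixed $R$ and $B$, the minimizer of the quadratic objective above has the standard closed form $x = (M^T R^{-1} M + B^{-1})^{-1} M^T R^{-1} y$. A sketch of this fixed-hyperparameter step (the variational Bayes inference of $R$ and $B$ themselves is not reproduced, and the data are synthetic):

```python
import numpy as np

def map_source_estimate(M, y, R, B):
    """MAP estimate of x for y = Mx + noise, noise ~ N(0, R), prior x ~ N(0, B):
    x = (M^T R^{-1} M + B^{-1})^{-1} M^T R^{-1} y, i.e. the minimiser of the
    quadratic objective quoted in the abstract with R and B held fixed."""
    Ri = np.linalg.inv(R)
    return np.linalg.solve(M.T @ Ri @ M + np.linalg.inv(B), M.T @ Ri @ y)

rng = np.random.default_rng(1)
M = rng.normal(size=(12, 4))            # hypothetical SRS matrix
x_true = np.array([1.0, -2.0, 0.5, 3.0])
y = M @ x_true                          # noise-free observations for the check
x_hat = map_source_estimate(M, y, R=0.01 * np.eye(12), B=100.0 * np.eye(4))
```

With a small measurement covariance and a broad prior, the estimate approaches the least-squares solution; shrinking $B$ strengthens the regularization toward zero.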
Covariant electromagnetic field lines
Hadad, Y.; Cohen, E.; Kaminer, I.; Elitzur, A. C.
2017-08-01
Faraday introduced electric field lines as a powerful tool for understanding the electric force, and these field lines are still used today in classrooms and textbooks teaching the basics of electromagnetism within the electrostatic limit. However, despite attempts at generalizing this concept beyond the electrostatic limit, such a fully relativistic field line theory still appears to be missing. In this work, we propose such a theory and define covariant electromagnetic field lines that naturally extend electric field lines to relativistic systems and general electromagnetic fields. We derive a closed-form formula for the curvature of the field lines in the vicinity of a charge, and show that it is related to the world line of the charge. This demonstrates how the kinematics of a charge can be derived from the geometry of the electromagnetic field lines. Such a theory may also provide new tools in modeling and analyzing electromagnetic phenomena, and may entail new insights regarding long-standing problems such as radiation-reaction and self-force. In particular, the curvature of the electromagnetic field lines has the attractive property of being non-singular everywhere, thus eliminating all self-field singularities without using renormalization techniques.
A New Approach for Nuclear Data Covariance and Sensitivity Generation
International Nuclear Information System (INIS)
Leal, L.C.; Larson, N.M.; Derrien, H.; Kawano, T.; Chadwick, M.B.
2005-01-01
Covariance data are required to correctly assess uncertainties in design parameters in nuclear applications. The error estimation of calculated quantities relies on the nuclear data uncertainty information available in the basic nuclear data libraries, such as the U.S. Evaluated Nuclear Data File, ENDF/B. The uncertainty files in the ENDF/B library are obtained from the analysis of experimental data and are stored as variance and covariance data. The computer code SAMMY is used in the analysis of the experimental data in the resolved and unresolved resonance energy regions. The data fitting of cross sections is based on generalized least-squares formalism (Bayes' theory) together with the resonance formalism described by R-matrix theory. Two approaches are used in SAMMY for the generation of resonance-parameter covariance data. In the evaluation process SAMMY generates a set of resonance parameters that fit the data, and, in addition, it also provides the resonance-parameter covariances. For existing resonance-parameter evaluations where no resonance-parameter covariance data are available, the alternative is to use an approach called the 'retroactive' resonance-parameter covariance generation. In the high-energy region the methodology for generating covariance data consists of least-squares fitting and model parameter adjustment. The least-squares fitting method calculates covariances directly from experimental data. The parameter adjustment method employs a nuclear model calculation such as the optical model and the Hauser-Feshbach model, and estimates a covariance for the nuclear model parameters. In this paper we describe the application of the retroactive method and the parameter adjustment method to generate covariance data for the gadolinium isotopes
SG39 Deliverables. Comments on Covariance Data
International Nuclear Information System (INIS)
Yokoyama, Kenji
2015-01-01
The covariance matrix of a scattered data set, x_i (i=1,n), must be symmetric and positive-definite. As one of WPEC/SG39 contributions to the SG40/CIELO project, several comments or recommendations on the covariance data are described here from the viewpoint of nuclear-data users. To make the comments concrete and useful for nuclear-data evaluators, the covariance data of the latest evaluated nuclear data libraries, JENDL-4.0 and ENDF/B-VII.1, are treated here as the representative materials. The surveyed nuclides are five isotopes that are most important for fast reactor applications. The nuclides, reactions and energy regions dealt with are as follows: Pu-239: fission (2.5∼10 keV) and capture (2.5∼10 keV), U-235: fission (500 eV∼10 keV) and capture (500 eV∼30 keV), U-238: fission (1∼10 MeV), capture (below 20 keV, 20∼150 keV), inelastic (above 100 keV) and elastic (above 20 keV), Fe-56: elastic (below 850 keV) and average scattering cosine (above 10 keV), and, Na-23: capture (600 eV∼600 keV), inelastic (above 1 MeV) and elastic (around 2 keV)
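The symmetry and positive-definiteness requirements stated above can be checked mechanically, e.g. via a Cholesky attempt (a generic validity check, not an SG39 deliverable):

```python
import numpy as np

def is_valid_covariance(C, tol=1e-10):
    """Check the two requirements a covariance matrix must satisfy:
    symmetry and positive definiteness (the latter via a Cholesky attempt)."""
    C = np.asarray(C, float)
    if C.ndim != 2 or C.shape[0] != C.shape[1]:
        return False
    if not np.allclose(C, C.T, atol=tol):
        return False            # not symmetric
    try:
        np.linalg.cholesky(C)   # succeeds iff C is positive definite
        return True
    except np.linalg.LinAlgError:
        return False
```

Evaluated covariance files that fail such a check (e.g. correlation coefficients implying a negative eigenvalue) cannot be used directly in uncertainty propagation.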
Covariation in Natural Causal Induction.
Cheng, Patricia W.; Novick, Laura R.
1991-01-01
Biases and models usually offered by cognitive and social psychology and by philosophy to explain causal induction are evaluated with respect to focal sets (contextually determined sets of events over which covariation is computed). A probabilistic contrast model is proposed as underlying covariation computation in natural causal induction. (SLD)
Modelling the Covariance Structure in Marginal Multivariate Count Models
DEFF Research Database (Denmark)
Bonat, W. H.; Olivero, J.; Grande-Vega, M.
2017-01-01
The main goal of this article is to present a flexible statistical modelling framework to deal with multivariate count data along with longitudinal and repeated measures structures. The covariance structure for each response variable is defined in terms of a covariance link function combined with a matrix linear predictor involving known matrices. In order to specify the joint covariance matrix for the multivariate response vector, the generalized Kronecker product is employed. We take into account the count nature of the data by means of the power dispersion function associated with the Poisson... ...be used to indicate whether there was statistical evidence of a decline in blue duikers and other species hunted during the study period. Determining whether observed drops in the number of animals hunted are indeed true is crucial to assess whether species depletion effects are taking place in exploited...
Sparse reduced-rank regression with covariance estimation
Chen, Lisha
2014-12-08
Improving the predicting performance of the multiple response regression compared with separate linear regressions is a challenging question. On the one hand, it is desirable to seek model parsimony when facing a large number of parameters. On the other hand, for certain applications it is necessary to take into account the general covariance structure for the errors of the regression model. We assume a reduced-rank regression model and work with the likelihood function with general error covariance to achieve both objectives. In addition we propose to select relevant variables for reduced-rank regression by using a sparsity-inducing penalty, and to estimate the error covariance matrix simultaneously by using a similar penalty on the precision matrix. We develop a numerical algorithm to solve the penalized regression problem. In a simulation study and real data analysis, the new method is compared with two recent methods for multivariate regression and exhibits competitive performance in prediction and variable selection.
Sparse reduced-rank regression with covariance estimation
Chen, Lisha; Huang, Jianhua Z.
2014-01-01
Improving the predicting performance of the multiple response regression compared with separate linear regressions is a challenging question. On the one hand, it is desirable to seek model parsimony when facing a large number of parameters. On the other hand, for certain applications it is necessary to take into account the general covariance structure for the errors of the regression model. We assume a reduced-rank regression model and work with the likelihood function with general error covariance to achieve both objectives. In addition we propose to select relevant variables for reduced-rank regression by using a sparsity-inducing penalty, and to estimate the error covariance matrix simultaneously by using a similar penalty on the precision matrix. We develop a numerical algorithm to solve the penalized regression problem. In a simulation study and real data analysis, the new method is compared with two recent methods for multivariate regression and exhibits competitive performance in prediction and variable selection.
Modeling the Conditional Covariance between Stock and Bond Returns
P. de Goeij (Peter); W.A. Marquering (Wessel)
2002-01-01
To analyze the intertemporal interaction between the stock and bond market returns, we allow the conditional covariance matrix to vary over time according to a multivariate GARCH model similar to Bollerslev, Engle and Wooldridge (1988). We extend the model such that it allows for...
Distributed Remote Vector Gaussian Source Coding with Covariance Distortion Constraints
DEFF Research Database (Denmark)
Zahedi, Adel; Østergaard, Jan; Jensen, Søren Holdt
2014-01-01
In this paper, we consider a distributed remote source coding problem, where a sequence of observations of source vectors is available at the encoder. The problem is to specify the optimal rate for encoding the observations subject to a covariance matrix distortion constraint and in the presence...
Galaxy-galaxy lensing estimators and their covariance properties
Singh, Sukhdeep; Mandelbaum, Rachel; Seljak, Uroš; Slosar, Anže; Vazquez Gonzalez, Jose
2017-11-01
We study the covariance properties of real space correlation function estimators - primarily galaxy-shear correlations, or galaxy-galaxy lensing - using SDSS data for both shear catalogues and lenses (specifically the BOSS LOWZ sample). Using mock catalogues of lenses and sources, we disentangle the various contributions to the covariance matrix and compare them with a simple analytical model. We show that not subtracting the lensing measurement around random points from the measurement around the lens sample is equivalent to performing the measurement using the lens density field instead of the lens overdensity field. While the measurement using the lens density field is unbiased (in the absence of systematics), its error is significantly larger due to an additional term in the covariance. Therefore, this subtraction should be performed regardless of its beneficial effects on systematics. Comparing the error estimates from data and mocks for estimators that involve the overdensity, we find that the errors are dominated by the shape noise and lens clustering, with empirically estimated covariances (jackknife and standard deviation across mocks) consistent with theoretical estimates, and that both the connected parts of the four-point function and the supersample covariance can be neglected for the current levels of noise. While the trade-off between different terms in the covariance depends on the survey configuration (area, source number density), the diagnostics that we use in this work should be useful for future works to test their empirically determined covariances.
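The delete-one jackknife mentioned among the empirical covariance estimates works, in its generic form, as follows (a sketch; the paper's spatial jackknife deletes sky patches rather than single observations):

```python
import numpy as np

def jackknife_covariance(samples, estimator):
    """Delete-one jackknife covariance of an estimator:
    (n-1)/n * sum_i (theta_(i) - theta_bar)(theta_(i) - theta_bar)^T,
    where theta_(i) is the estimate with sample i removed."""
    samples = np.asarray(samples, float)
    n = len(samples)
    reps = np.array([np.atleast_1d(estimator(np.delete(samples, i, axis=0)))
                     for i in range(n)])
    d = reps - reps.mean(axis=0)
    return (n - 1) / n * (d.T @ d)

# Example: jackknife variance of the sample mean
x = np.random.default_rng(2).normal(size=30)
jk_var = jackknife_covariance(x, np.mean)[0, 0]
```

For the sample mean, the jackknife variance reproduces the classical s²/n exactly, which makes a convenient sanity check before applying it to less tractable estimators.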
Galaxy–galaxy lensing estimators and their covariance properties
International Nuclear Information System (INIS)
Singh, Sukhdeep; Mandelbaum, Rachel; Seljak, Uros; Slosar, Anze; Gonzalez, Jose Vazquez
2017-01-01
Here, we study the covariance properties of real space correlation function estimators – primarily galaxy–shear correlations, or galaxy–galaxy lensing – using SDSS data for both shear catalogues and lenses (specifically the BOSS LOWZ sample). Using mock catalogues of lenses and sources, we disentangle the various contributions to the covariance matrix and compare them with a simple analytical model. We show that not subtracting the lensing measurement around random points from the measurement around the lens sample is equivalent to performing the measurement using the lens density field instead of the lens overdensity field. While the measurement using the lens density field is unbiased (in the absence of systematics), its error is significantly larger due to an additional term in the covariance. Therefore, this subtraction should be performed regardless of its beneficial effects on systematics. Comparing the error estimates from data and mocks for estimators that involve the overdensity, we find that the errors are dominated by the shape noise and lens clustering, with empirically estimated covariances (jackknife and standard deviation across mocks) consistent with theoretical estimates, and that both the connected parts of the four-point function and the supersample covariance can be neglected for the current levels of noise. While the trade-off between different terms in the covariance depends on the survey configuration (area, source number density), the diagnostics that we use in this work should be useful for future works to test their empirically determined covariances.
Gaskins, J T; Daniels, M J
2016-01-02
The estimation of the covariance matrix is a key concern in the analysis of longitudinal data. When data consists of multiple groups, it is often assumed the covariance matrices are either equal across groups or are completely distinct. We seek methodology to allow borrowing of strength across potentially similar groups to improve estimation. To that end, we introduce a covariance partition prior which proposes a partition of the groups at each measurement time. Groups in the same set of the partition share dependence parameters for the distribution of the current measurement given the preceding ones, and the sequence of partitions is modeled as a Markov chain to encourage similar structure at nearby measurement times. This approach additionally encourages a lower-dimensional structure of the covariance matrices by shrinking the parameters of the Cholesky decomposition toward zero. We demonstrate the performance of our model through two simulation studies and the analysis of data from a depression study. This article includes Supplementary Material available online.
Tian, Wei; Cai, Li; Thissen, David; Xin, Tao
2013-01-01
In item response theory (IRT) modeling, the item parameter error covariance matrix plays a critical role in statistical inference procedures. When item parameters are estimated using the EM algorithm, the parameter error covariance matrix is not an automatic by-product of item calibration. Cai proposed the use of Supplemented EM algorithm for…
Covariance Manipulation for Conjunction Assessment
Hejduk, M. D.
2016-01-01
The manipulation of space object covariances to try to provide additional or improved information to conjunction risk assessment is not an uncommon practice. Types of manipulation include fabricating a covariance when it is missing or unreliable to force the probability of collision (Pc) to a maximum value ('PcMax'), scaling a covariance to try to improve its realism or see the effect of covariance volatility on the calculated Pc, and constructing the equivalent of an epoch covariance at a convenient future point in the event ('covariance forecasting'). In bringing these methods to bear for Conjunction Assessment (CA) operations, however, some do not remain fully consistent with best practices for conducting risk management, some seem to be of relatively low utility, and some require additional information before they can contribute fully to risk analysis. This study describes some basic principles of modern risk management (following the Kaplan construct) and then examines the PcMax and covariance forecasting paradigms for alignment with these principles; it then further examines the expected utility of these methods in the modern CA framework. Both paradigms are found to be not without utility, but only in situations that are somewhat carefully circumscribed.
Evaluation and processing of covariance data
International Nuclear Information System (INIS)
Wagner, M.
1993-01-01
These proceedings of a specialists' meeting on evaluation and processing of covariance data are divided into 4 parts: part 1, needs for evaluated covariance data (2 papers); part 2, generation of covariance data (15 papers); part 3, processing of covariance files (2 papers); part 4, experience in the use of evaluated covariance data (2 papers)
Ex post damage assessment: an Italian experience
Molinari, D.; Menoni, S.; Aronica, G. T.; Ballio, F.; Berni, N.; Pandolfo, C.; Stelluti, M.; Minucci, G.
2014-04-01
In recent years, awareness of a need for more effective disaster data collection, storage, and sharing of analyses has developed in many parts of the world. In line with this advance, Italian local authorities have expressed the need for enhanced methods and procedures for post-event damage assessment in order to obtain data that can serve numerous purposes: to create a reliable and consistent database on the basis of which damage models can be defined or validated; and to supply a comprehensive scenario of flooding impacts according to which priorities can be identified during the emergency and recovery phase, and the compensation due to citizens from insurers or local authorities can be established. This paper studies this context, and describes ongoing activities in the Umbria and Sicily regions of Italy intended to identify new tools and procedures for flood damage data surveys and storage in the aftermath of floods. In the first part of the paper, the current procedures for data gathering in Italy are analysed. The analysis shows that the available knowledge does not enable the definition or validation of damage curves, as information is poor, fragmented, and inconsistent. A new procedure for data collection and storage is therefore proposed. The entire analysis was carried out at a local level for the residential and commercial sectors only. The objective of the next steps for the research in the short term will be (i) to extend the procedure to other types of damage, and (ii) to make the procedure operational with the Italian Civil Protection system. The long-term aim is to develop specific depth-damage curves for Italian contexts.
Beamforming using subspace estimation from a diagonally averaged sample covariance.
Quijano, Jorge E; Zurk, Lisa M
2017-08-01
The potential benefit of a large-aperture sonar array for high resolution target localization is often challenged by the lack of sufficient data required for adaptive beamforming. This paper introduces a Toeplitz-constrained estimator of the clairvoyant signal covariance matrix corresponding to multiple far-field targets embedded in background isotropic noise. The estimator is obtained by averaging along subdiagonals of the sample covariance matrix, followed by covariance extrapolation using the method of maximum entropy. The sample covariance is computed from limited data snapshots, a situation commonly encountered with large-aperture arrays in environments characterized by short periods of local stationarity. Eigenvectors computed from the Toeplitz-constrained covariance are used to construct signal-subspace projector matrices, which are shown to reduce background noise and improve detection of closely spaced targets when applied to subspace beamforming. Monte Carlo simulations corresponding to increasing array aperture suggest convergence of the proposed projector to the clairvoyant signal projector, thereby outperforming the classic projector obtained from the sample eigenvectors. Beamforming performance of the proposed method is analyzed using simulated data, as well as experimental data from the Shallow Water Array Performance experiment.
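The diagonal-averaging step can be sketched as follows (the subsequent maximum-entropy extrapolation described in the abstract is omitted, and the data are synthetic):

```python
import numpy as np

def toeplitz_average(S):
    """Toeplitz-constrained covariance estimate obtained by averaging the
    sample covariance S along each of its (sub)diagonals, the first step
    described in the abstract."""
    n = S.shape[0]
    T = np.zeros_like(S, dtype=float)
    for k in range(n):
        d = np.diagonal(S, offset=k).mean()  # average of the k-th diagonal
        i = np.arange(n - k)
        T[i, i + k] = d
        T[i + k, i] = d
    return T

# Sample covariance from very few snapshots (the data-starved regime)
rng = np.random.default_rng(3)
snapshots = rng.normal(size=(5, 8))          # 5 snapshots, 8 sensors
S = np.cov(snapshots, rowvar=False)          # rank-deficient sample covariance
T = toeplitz_average(S)
```

For a uniform linear array in stationary noise the true covariance is Toeplitz, so imposing this structure trades a small bias for a large variance reduction when snapshots are scarce.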
On Galilean covariant quantum mechanics
International Nuclear Information System (INIS)
Horzela, A.; Kapuscik, E.; Kempczynski, J.; Joint Inst. for Nuclear Research, Dubna
1991-08-01
A formalism exhibiting the Galilean covariance of wave mechanics is proposed. A new notion of quantum mechanical forces is introduced. The formalism is illustrated on the example of the harmonic oscillator. (author)
Matrix algebra for higher order moments
Meijer, Erik
2005-01-01
A large part of statistics is devoted to the estimation of models from the sample covariance matrix. The development of the statistical theory and estimators has been greatly facilitated by the introduction of special matrices, such as the commutation matrix and the duplication matrix, and the...
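The two special matrices named here have simple constructive definitions: the commutation matrix K_mn maps vec(A) to vec(Aᵀ), and the duplication matrix D_n maps vech(A) to vec(A) for symmetric A (with vec and vech column-major). A sketch of both constructions:

```python
import numpy as np

def commutation_matrix(m, n):
    """K_mn such that K_mn @ vec(A) = vec(A.T) for any m-by-n matrix A,
    where vec stacks columns (column-major order)."""
    K = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            # A[i, j] sits at position j*m + i in vec(A)
            # and at position i*n + j in vec(A.T)
            K[i * n + j, j * m + i] = 1.0
    return K

def duplication_matrix(n):
    """D_n with D_n @ vech(A) = vec(A) for symmetric A, where vech stacks
    the lower-triangular part column by column."""
    D = np.zeros((n * n, n * (n + 1) // 2))
    col = 0
    for j in range(n):
        for i in range(j, n):
            D[j * n + i, col] = 1.0   # position of A[i, j] in vec(A)
            D[i * n + j, col] = 1.0   # position of A[j, i] in vec(A)
            col += 1
    return D
```

These matrices let derivatives of matrix-valued functions be written compactly, e.g. when differentiating likelihoods with respect to the free elements of a symmetric covariance matrix.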
Ellipsoids and matrix-valued valuations
Ludwig, Monika
2003-01-01
We obtain a classification of Borel measurable, GL(n) covariant, symmetric-matrix-valued valuations on the space of n-dimensional convex polytopes. The only ones turn out to be the moment matrix corresponding to the classical Legendre ellipsoid and the matrix corresponding to the ellipsoid recently discovered by E. Lutwak, D. Yang, and G. Zhang.
Visualization and assessment of spatio-temporal covariance properties
Huang, Huang
2017-11-23
Spatio-temporal covariances are important for describing the spatio-temporal variability of underlying random fields in geostatistical data. For second-order stationary random fields, there exist subclasses of covariance functions that assume a simpler spatio-temporal dependence structure with separability and full symmetry. However, it is challenging to visualize and assess separability and full symmetry from spatio-temporal observations. In this work, we propose a functional data analysis approach that constructs test functions using the cross-covariances from time series observed at each pair of spatial locations. These test functions of temporal lags summarize the properties of separability or symmetry for the given spatial pairs. We use functional boxplots to visualize the functional median and the variability of the test functions, where the extent of departure from zero at all temporal lags indicates the degree of non-separability or asymmetry. We also develop a rank-based nonparametric testing procedure for assessing the significance of the non-separability or asymmetry. Essentially, the proposed methods only require the analysis of temporal covariance functions. Thus, a major advantage over existing approaches is that there is no need to estimate any covariance matrix for selected spatio-temporal lags. The performances of the proposed methods are examined by simulations with various commonly used spatio-temporal covariance models. To illustrate our methods in practical applications, we apply them to real datasets, including weather station data and climate model outputs.
Real-time probabilistic covariance tracking with efficient model update.
Wu, Yi; Cheng, Jian; Wang, Jinqiao; Lu, Hanqing; Wang, Jun; Ling, Haibin; Blasch, Erik; Bai, Li
2012-05-01
The recently proposed covariance region descriptor has been proven robust and versatile for a modest computational cost. The covariance matrix enables efficient fusion of different types of features, where the spatial and statistical properties, as well as their correlation, are characterized. The similarity between two covariance descriptors is measured on Riemannian manifolds. Based on the same metric but within a probabilistic framework, we propose a novel tracking approach on Riemannian manifolds with a novel incremental covariance tensor learning (ICTL). To address the appearance variations, ICTL incrementally learns a low-dimensional covariance tensor representation and efficiently adapts online to appearance changes of the target with only O(1) computational complexity, resulting in real-time performance. The covariance-based representation and the ICTL are then combined with the particle filter framework to allow better handling of background clutter, as well as temporary occlusions. We test the proposed probabilistic ICTL tracker on numerous benchmark sequences involving different types of challenges including occlusions and variations in illumination, scale, and pose. The proposed approach demonstrates excellent real-time performance, both qualitatively and quantitatively, in comparison with several previously proposed trackers.
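A covariance region descriptor and the Riemannian comparison of two descriptors can be sketched as follows (using the affine-invariant metric based on generalized eigenvalues commonly used with such descriptors; illustrative only, not the ICTL tracker itself, and the features are synthetic):

```python
import numpy as np

def covariance_descriptor(features):
    """Covariance region descriptor: the sample covariance of per-pixel
    feature vectors (features: (n_pixels, d) array, e.g. position,
    intensity, gradients)."""
    return np.cov(features, rowvar=False)

def riemannian_distance(C1, C2):
    """Affine-invariant Riemannian distance between SPD matrices:
    sqrt(sum_i ln^2 lambda_i), where lambda_i are the generalized
    eigenvalues of (C1, C2), i.e. the eigenvalues of C2^{-1} C1."""
    lam = np.linalg.eigvals(np.linalg.solve(C2, C1))
    return float(np.sqrt(np.sum(np.log(lam.real) ** 2)))

# Hypothetical region of 500 pixels with 3 features each
rng = np.random.default_rng(4)
C = covariance_descriptor(rng.normal(size=(500, 3)))
```

This metric is invariant under affine feature transformations, which is why it is preferred over the Euclidean distance when comparing covariance descriptors of a target across frames.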
Updated Covariance Processing Capabilities in the AMPX Code System
International Nuclear Information System (INIS)
Wiarda, Dorothea; Dunn, Michael E.
2007-01-01
A concerted effort is in progress within the nuclear data community to provide new cross-section covariance data evaluations to support sensitivity/uncertainty analyses of fissionable systems. The objective of this work is to update processing capabilities of the AMPX library to process the latest Evaluated Nuclear Data File (ENDF)/B formats to generate covariance data libraries for radiation transport software such as SCALE. The module PUFF-IV was updated to allow processing of new ENDF covariance formats in the resolved resonance region. In the resolved resonance region, covariance matrices are given in terms of resonance parameters, which need to be processed into covariance matrices with respect to the group-averaged cross-section data. The parameter covariance matrix can be quite large if the evaluation has many resonances. The PUFF-IV code has recently been used to process an evaluation of 235U, which was prepared in collaboration between Oak Ridge National Laboratory and Los Alamos National Laboratory.
Ocean Spectral Data Assimilation Without Background Error Covariance Matrix
2016-01-01
(Abstract fragment) ... reduction using the OSD is evident in comparison to the OI scheme. Synoptic monthly gridded world ocean temperature, salinity, and absolute ...
Neutron cross section and covariance data evaluation of experimental data for {sup 27}Al
Energy Technology Data Exchange (ETDEWEB)
Chunjuan, Li; Jianfeng, Liu [Physics Department , Zhengzhou Univ., Zhengzhou (China); Tingjin, Liu [China Nuclear Data Center, China Inst. of Atomic Energy, Beijing (China)
2006-07-15
The evaluation of neutron cross sections and covariance data for {sup 27}Al in the energy range from 210 keV to 20 MeV was carried out on the basis of experimental data taken mainly from the EXFOR library. After the experimental data and their errors were analyzed, selected and corrected, the SPCC code was used to fit the data and merge the covariance matrices. The evaluated neutron cross section data and covariance matrix for {sup 27}Al can be adopted for the evaluated library and can also serve as a basis for related theoretical calculations. (authors)
Generation of phase-covariant quantum cloning
International Nuclear Information System (INIS)
Karimipour, V.; Rezakhani, A.T.
2002-01-01
It is known that in phase-covariant quantum cloning, the equatorial states on the Bloch sphere can be cloned with a fidelity higher than the optimal bound established for universal quantum cloning. We generalize this concept to include other states on the Bloch sphere with a definite z component of spin. It is shown that once we know the z component, we can always clone a state with a fidelity higher than the universal value and that of equatorial states. We also make a detailed study of the entanglement properties of the output copies and show that the equatorial states are the only states that give rise to a separable density matrix for the outputs
Directory of Open Access Journals (Sweden)
I PUTU EKA IRAWAN
2013-11-01
Principal Component Regression is a method to overcome multicollinearity by combining principal component analysis with regression analysis. Classical principal component analysis is based on the regular covariance matrix, which is optimal if the data originate from a multivariate normal distribution but is very sensitive to the presence of outliers. The method of Least Median Square-Minimum Covariance Determinant (LMS-MCD) is an alternative used to overcome this problem. The purpose of this research is to compare Principal Component Regression (RKU) and the Least Median Square-Minimum Covariance Determinant (LMS-MCD) method in dealing with outliers. In this study, the LMS-MCD method has smaller bias and mean square error (MSE) than RKU. Based on the difference of parameter estimators, however, a test shows that the parameter-estimator difference of the LMS-MCD method is greater than that of the RKU method.
GLq(N)-covariant quantum algebras and covariant differential calculus
International Nuclear Information System (INIS)
Isaev, A.P.; Pyatov, P.N.
1992-01-01
GL q (N)-covariant quantum algebras with generators satisfying quadratic polynomial relations are considered. It is shown that, up to some inessential arbitrariness, there are only two kinds of such quantum algebras, namely, the algebras with q-deformed commutation and q-deformed anticommutation relations. 25 refs
GLq(N)-covariant quantum algebras and covariant differential calculus
International Nuclear Information System (INIS)
Isaev, A.P.; Pyatov, P.N.
1993-01-01
We consider GL q (N)-covariant quantum algebras with generators satisfying quadratic polynomial relations. We show that, up to some inessential arbitrariness, there are only two kinds of such quantum algebras, namely, the algebras with q-deformed commutation and q-deformed anticommutation relations. The connection with the bicovariant differential calculus on the linear quantum groups is discussed. (orig.)
A class of covariate-dependent spatiotemporal covariance functions
Reich, Brian J; Eidsvik, Jo; Guindani, Michele; Nail, Amy J; Schmidt, Alexandra M.
2014-01-01
In geostatistics, it is common to model spatially distributed phenomena through an underlying stationary and isotropic spatial process. However, these assumptions are often untenable in practice because of the influence of local effects in the correlation structure. Therefore, it has been of prolonged interest in the literature to provide flexible and effective ways to model non-stationarity in the spatial effects. Arguably, due to the local nature of the problem, we might envision that the correlation structure would be highly dependent on local characteristics of the domain of study, namely the latitude, longitude and altitude of the observation sites, as well as other locally defined covariate information. In this work, we provide a flexible and computationally feasible way to allow the correlation structure of the underlying processes to depend on local covariate information. We discuss the properties of the induced covariance functions and methods to assess their dependence on local covariate information by means of a simulation study and the analysis of data observed at ozone-monitoring stations in the Southeast United States. PMID:24772199
Cosmic censorship conjecture revisited: covariantly
International Nuclear Information System (INIS)
Hamid, Aymen I M; Goswami, Rituparno; Maharaj, Sunil D
2014-01-01
In this paper we study the dynamics of the trapped region using a frame independent semi-tetrad covariant formalism for general locally rotationally symmetric (LRS) class II spacetimes. We covariantly prove some important geometrical results for the apparent horizon, and state the necessary and sufficient conditions for a singularity to be locally naked. These conditions bring out, for the first time in a quantitative and transparent manner, the importance of the Weyl curvature in deforming and delaying the trapped region during continual gravitational collapse, making the central singularity locally visible. (paper)
Treating Sample Covariances for Use in Strongly Coupled Atmosphere-Ocean Data Assimilation
Smith, Polly J.; Lawless, Amos S.; Nichols, Nancy K.
2018-01-01
Strongly coupled data assimilation requires cross-domain forecast error covariances; information from ensembles can be used, but limited sampling means that ensemble derived error covariances are routinely rank deficient and/or ill-conditioned and marred by noise. Thus, they require modification before they can be incorporated into a standard assimilation framework. Here we compare methods for improving the rank and conditioning of multivariate sample error covariance matrices for coupled atmosphere-ocean data assimilation. The first method, reconditioning, alters the matrix eigenvalues directly; this preserves the correlation structures but does not remove sampling noise. We show that it is better to recondition the correlation matrix rather than the covariance matrix as this prevents small but dynamically important modes from being lost. The second method, model state-space localization via the Schur product, effectively removes sample noise but can dampen small cross-correlation signals. A combination that exploits the merits of each is found to offer an effective alternative.
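The two fixes compared above are simple to state in linear-algebra terms. A hedged numpy sketch (function names and the choice of eigenvalue floor are illustrative; the paper's exact reconditioning variant may differ):

```python
import numpy as np

def recondition(C, kappa_max=100.0):
    """Raise the smallest eigenvalues so the condition number of a
    correlation/covariance matrix does not exceed kappa_max; the
    eigenvector structure (hence the correlations) is preserved, but
    sampling noise is not removed."""
    w, V = np.linalg.eigh((C + C.T) / 2)
    floor = w.max() / kappa_max
    return V @ np.diag(np.maximum(w, floor)) @ V.T

def localize(C, L):
    """Schur (element-wise) product with a localization matrix L; this
    removes spurious long-range sample correlations but can also dampen
    genuine small cross-domain signals."""
    return C * L
```

Reconditioning the correlation matrix rather than the covariance, as the paper recommends, amounts to applying `recondition` after factoring out the standard deviations and rescaling afterwards.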
Bayesian estimation of covariance matrices: Application to market risk management at EDF
International Nuclear Information System (INIS)
Jandrzejewski-Bouriga, M.
2012-01-01
In this thesis, we develop new methods of regularized covariance matrix estimation in the Bayesian setting. The regularization methodology employed is first related to shrinkage. We investigate a new Bayesian model of the covariance matrix, based on a hierarchical inverse-Wishart distribution, and then derive different estimators under standard loss functions. Comparisons between shrunk and empirical estimators are performed in terms of frequentist performance under different losses, which highlights the critical importance of the choice of cost function and shows the persistent effect of the shrinkage-type prior on inference. We then consider the problem of covariance matrix estimation in Gaussian graphical models. While this issue is well treated in the decomposable case, it is not for non-decomposable graphs. We therefore describe a Bayesian and operational methodology for estimating the covariance matrix of Gaussian graphical models, decomposable or not. This procedure is based on a new and objective method of graphical-model selection, combined with a constrained and regularized estimation of the covariance matrix of the chosen model. The procedures studied handle missing data effectively. These estimation techniques were applied to compute the covariance matrices involved in market risk management for portfolios of EDF (Electricity of France), in particular for problems of computing Value-at-Risk and in Asset Liability Management. (author)
MIMO-radar Waveform Covariance Matrices for High SINR and Low Side-lobe Levels
Ahmed, Sajid
2012-12-29
MIMO-radar has better parametric identifiability, but compared to phased-array radar it shows a loss in signal-to-noise ratio due to non-coherent processing. To exploit the benefits of both MIMO-radar and phased-array radar, two transmit covariance matrices are found. Both covariance matrices yield a gain in signal-to-interference-plus-noise ratio (SINR) compared to MIMO-radar and have lower side-lobe levels (SLLs) compared to phased-array and MIMO-radar. Moreover, in contrast to the recently introduced phased-MIMO scheme, where each antenna transmits a different power, our proposed schemes allow the same power transmission from each antenna. The SLLs of the first proposed covariance matrix are higher than those of the phased-MIMO scheme, while the SLLs of the second proposed covariance matrix are lower. The first covariance matrix is generated using an auto-regressive process, which allows us to change the SINR and side-lobe levels by changing the auto-regressive parameter, while to generate the second covariance matrix the values of the sine function between 0 and $\pi$ with a step size of $\pi/n_T$ are used to form a positive-semidefinite Toeplitz matrix, where $n_T$ is the number of transmit antennas. Simulation results validate our analytical results.
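The construction of the second matrix can be made concrete, with the caveat that the abstract does not spell out the indexing. A sketch of one plausible reading (the unit diagonal `c[0] = 1` is an added assumption of this sketch, since sin(0) = 0 cannot serve as a transmit power; the paper's actual construction may differ):

```python
import numpy as np

def sine_toeplitz(n_t):
    """Symmetric Toeplitz matrix whose first column samples the sine
    function on [0, pi) with step pi/n_t, per the abstract's description.
    The unit diagonal is an assumption, not taken from the paper."""
    c = np.sin(np.arange(n_t) * np.pi / n_t)
    c[0] = 1.0
    # fancy-index |i - j| into c to lay out the constant diagonals
    idx = np.abs(np.subtract.outer(np.arange(n_t), np.arange(n_t)))
    return c[idx]
```

By construction the matrix is symmetric with constant diagonals; positive semi-definiteness would need to be verified for the indexing the paper actually uses.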
Hamiltonian formalism, quantization and S matrix for supergravity. [S matrix, canonical constraints
Energy Technology Data Exchange (ETDEWEB)
Fradkin, E S; Vasiliev, M A [AN SSSR, Moscow. Fizicheskij Inst.
1977-12-05
The canonical formalism for supergravity is constructed. The algebra of canonical constraints is found. The correct expression for the S matrix is obtained. The usual 'covariant methods' lead to an incorrect S matrix in supergravity, since a new four-particle interaction of ghost fields survives in the Lagrangian expression of the S matrix.
Extended covariance data formats for the ENDF/B-VI differential data evaluation
International Nuclear Information System (INIS)
Peelle, R.W.; Muir, D.W.
1988-01-01
ENDF/B-V included cross section covariance data, but covariances could not be encoded for all the important data types. New ENDF-6 covariance formats are outlined, including those for cross-file (MF) covariances, resonance parameters over the whole range, and secondary energy and angle distributions. One 'late entry' format encodes covariance data for cross sections that are output from model or fitting codes, in terms of the model parameter covariance matrix and the tabulated derivatives of cross sections with respect to the model parameters. Another new format yields multigroup cross section variances that increase as the group width decreases. When evaluators use the new formats, the files can be processed and used for improved uncertainty propagation and data combination. 22 refs
AFCI-2.0 Neutron Cross Section Covariance Library
Energy Technology Data Exchange (ETDEWEB)
Herman, M.; Oblozinsky, P.; Mattoon, C.M.; Pigni, M.; Hoblit, S.; Mughabghab, S.F.; Sonzogni, A.; Talou, P.; Chadwick, M.B.; Hale, G.M.; Kahler, A.C.; Kawano, T.; Little, R.C.; Yount, P.G.
2011-03-01
The cross section covariance library has been under development by BNL-LANL collaborative effort over the last three years. The project builds on two covariance libraries developed earlier, with considerable input from BNL and LANL. In 2006, international effort under WPEC Subgroup 26 produced BOLNA covariance library by putting together data, often preliminary, from various sources for most important materials for nuclear reactor technology. This was followed in 2007 by collaborative effort of four US national laboratories to produce covariances, often of modest quality - hence the name low-fidelity, for virtually complete set of materials included in ENDF/B-VII.0. The present project is focusing on covariances of 4-5 major reaction channels for 110 materials of importance for power reactors. The work started under Global Nuclear Energy Partnership (GNEP) in 2008, which changed to Advanced Fuel Cycle Initiative (AFCI) in 2009. With the 2011 release the name has changed to the Covariance Multigroup Matrix for Advanced Reactor Applications (COMMARA) version 2.0. The primary purpose of the library is to provide covariances for AFCI data adjustment project, which is focusing on the needs of fast advanced burner reactors. Responsibility of BNL was defined as developing covariances for structural materials and fission products, management of the library and coordination of the work; LANL responsibility was defined as covariances for light nuclei and actinides. The COMMARA-2.0 covariance library has been developed by BNL-LANL collaboration for Advanced Fuel Cycle Initiative applications over the period of three years, 2008-2010. It contains covariances for 110 materials relevant to fast reactor R&D. The library is to be used together with the ENDF/B-VII.0 central values of the latest official release of US files of evaluated neutron cross sections. COMMARA-2.0 library contains neutron cross section covariances for 12 light nuclei (coolants and moderators), 78 structural
Covariant Gauss law commutator anomaly
International Nuclear Information System (INIS)
Dunne, G.V.; Trugenberger, C.A.; Massachusetts Inst. of Tech., Cambridge
1990-01-01
Using a (fixed-time) hamiltonian formalism we derive a covariant form for the anomaly in the commutator algebra of Gauss law generators for chiral fermions interacting with a dynamical non-abelian gauge field in 3+1 dimensions. (orig.)
Covariant gauges for constrained systems
International Nuclear Information System (INIS)
Gogilidze, S.A.; Khvedelidze, A.M.; Pervushin, V.N.
1995-01-01
A method of constructing an extended phase space for singular theories, which permits the consideration of covariant gauges without introducing ghost fields, is proposed. The extension of the phase space is carried out by identifying the initial theory with an equivalent theory with higher derivatives and applying the Ostrogradsky method of Hamiltonian description to it. 7 refs
Uncertainty covariances in robotics applications
International Nuclear Information System (INIS)
Smith, D.L.
1984-01-01
The application of uncertainty covariance matrices in the analysis of robot trajectory errors is explored. First, relevant statistical concepts are reviewed briefly. Then, a simple, hypothetical robot model is considered to illustrate methods for error propagation and performance test data evaluation. The importance of including error correlations is emphasized
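For a concrete instance of such error propagation, consider a planar two-link arm: a joint-angle covariance maps to an end-effector covariance through the Jacobian as J Σ J^T, and the off-diagonal terms carry exactly the error correlations the abstract says must not be dropped. The model below is a hypothetical illustration, not the report's robot:

```python
import numpy as np

def fk(theta, l1=1.0, l2=1.0):
    """End-effector position of a planar two-link arm (illustrative model)."""
    t1, t12 = theta[0], theta[0] + theta[1]
    return np.array([l1 * np.cos(t1) + l2 * np.cos(t12),
                     l1 * np.sin(t1) + l2 * np.sin(t12)])

def jacobian(theta, l1=1.0, l2=1.0):
    """Analytic Jacobian of fk with respect to the joint angles."""
    t1, t12 = theta[0], theta[0] + theta[1]
    return np.array([[-l1 * np.sin(t1) - l2 * np.sin(t12), -l2 * np.sin(t12)],
                     [ l1 * np.cos(t1) + l2 * np.cos(t12),  l2 * np.cos(t12)]])

# first-order propagation of joint-angle covariance to Cartesian covariance
theta = np.array([0.3, 0.5])
cov_theta = np.array([[1e-4, 2e-5],
                      [2e-5, 1e-4]])   # correlated joint-angle errors
J = jacobian(theta)
cov_pos = J @ cov_theta @ J.T          # end-effector position covariance
```

Setting the off-diagonal terms of `cov_theta` to zero and comparing `cov_pos` shows how ignoring correlations distorts the predicted trajectory error.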
Criteria of the validation of experimental and evaluated covariance data
International Nuclear Information System (INIS)
Badikov, S.
2008-01-01
The criteria for the validation of experimental and evaluated covariance data are reviewed, in particular: a) the criterion of positive definiteness for covariance matrices, b) the relationship between the 'integral' experimental and estimated uncertainties, c) the validity of the statistical invariants, and d) the restrictions imposed on correlations between experimental errors. The application of these criteria in nuclear data evaluation is considered through four particular points. First, preserving positive definiteness of covariance matrices under arbitrary transformations of a random vector is considered, and the properties of covariance matrices under operations widely used in neutron and reactor physics (splitting and collapsing energy groups, averaging physical values over energy groups, estimating parameters on the basis of measurements by means of the generalized least squares method) are studied. Second, an algorithm for comparing experimental and estimated 'integral' uncertainties is developed; the square root of the determinant of a covariance matrix is recommended for use in nuclear data evaluation as a measure of 'integral' uncertainty for vectors of experimental and estimated values. Third, a set of statistical invariants (values which are preserved in statistical processing) is presented. Fourth, an inequality is given that signals correlations between experimental errors that lead to unphysical values. An application is given concerning the cross section of the (n,t) reaction on 6Li with a neutron incident energy between 1 and 100 keV
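Two of the points above translate directly into code: criterion a) can be checked with a Cholesky factorization, and group collapsing is a congruence transform that preserves positive semi-definiteness. A minimal sketch (function names are illustrative, not from the paper):

```python
import numpy as np

def is_positive_definite(cov):
    """Criterion a): a valid covariance matrix must be symmetric and
    positive definite; a Cholesky attempt is a cheap practical test."""
    if not np.allclose(cov, cov.T):
        return False
    try:
        np.linalg.cholesky(cov)
        return True
    except np.linalg.LinAlgError:
        return False

def collapse_groups(cov, A):
    """Collapsing or averaging energy groups is a linear map y = A x, so
    the covariance transforms as A cov A^T, which preserves positive
    semi-definiteness."""
    return A @ cov @ A.T
```

A correlation coefficient outside the range allowed by criterion d) shows up here as a failed Cholesky factorization.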
Bayes Factor Covariance Testing in Item Response Models.
Fox, Jean-Paul; Mulder, Joris; Sinharay, Sandip
2017-12-01
Two marginal one-parameter item response theory models are introduced, by integrating out the latent variable or random item parameter. It is shown that both marginal response models are multivariate (probit) models with a compound symmetry covariance structure. Several common hypotheses concerning the underlying covariance structure are evaluated using (fractional) Bayes factor tests. The support for a unidimensional factor (i.e., assumption of local independence) and differential item functioning are evaluated by testing the covariance components. The posterior distribution of common covariance components is obtained in closed form by transforming latent responses with an orthogonal (Helmert) matrix. This posterior distribution is defined as a shifted-inverse-gamma, thereby introducing a default prior and a balanced prior distribution. Based on that, an MCMC algorithm is described to estimate all model parameters and to compute (fractional) Bayes factor tests. Simulation studies are used to show that the (fractional) Bayes factor tests have good properties for testing the underlying covariance structure of binary response data. The method is illustrated with two real data studies.
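The key algebraic fact used above, that an orthogonal Helmert matrix diagonalizes a compound-symmetry covariance into one "common" and n-1 "unique" components, can be checked directly. A sketch (the Helmert construction is standard; the variable names are illustrative):

```python
import numpy as np

def helmert_matrix(n):
    """Full orthogonal Helmert matrix: a mean row followed by n-1 contrasts."""
    H = np.zeros((n, n))
    H[0] = 1.0 / np.sqrt(n)
    for i in range(1, n):
        H[i, :i] = 1.0 / np.sqrt(i * (i + 1))
        H[i, i] = -i / np.sqrt(i * (i + 1))
    return H

n, sigma2, tau2 = 5, 1.0, 0.5
Sigma = sigma2 * np.eye(n) + tau2 * np.ones((n, n))  # compound symmetry
H = helmert_matrix(n)
D = H @ Sigma @ H.T  # diagonal: sigma2 + n*tau2 once, then sigma2 repeated
```

The diagonalized form is what makes the posterior of the covariance components available in closed form.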
Fast Component Pursuit for Large-Scale Inverse Covariance Estimation.
Han, Lei; Zhang, Yu; Zhang, Tong
2016-08-01
The maximum likelihood estimation (MLE) for the Gaussian graphical model, which is also known as the inverse covariance estimation problem, has gained increasing interest recently. Most existing works assume that inverse covariance estimators contain sparse structure and then construct models with ℓ1 regularization. In this paper, unlike existing works, we study the inverse covariance estimation problem from another perspective by efficiently modeling the low-rank structure in the inverse covariance, which is assumed to be a combination of a low-rank part and a diagonal matrix. One motivation for this assumption is that the low-rank structure is common in many applications, including climate and financial analysis; another is that such an assumption can reduce the computational complexity of computing the inverse. Specifically, we propose an efficient COmponent Pursuit (COP) method to obtain the low-rank part, where each component can be sparse. For optimization, the COP method greedily learns a rank-one component in each iteration by maximizing the log-likelihood. Moreover, the COP algorithm enjoys several appealing properties, including the existence of an efficient solution in each iteration and a theoretical guarantee on the convergence of this greedy approach. Experiments on large-scale synthetic and real-world datasets, including thousands of millions of variables, show that the COP method is faster than the state-of-the-art techniques for the inverse covariance estimation problem while achieving comparable log-likelihood on test data.
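The computational payoff of the low-rank-plus-diagonal assumption comes from the Woodbury identity: inverting D + V V^T costs O(p r^2) rather than O(p^3) when the rank r is small. A sketch of that step alone (not of the COP algorithm itself):

```python
import numpy as np

def woodbury_inverse(d, V):
    """Inverse of (D + V V^T) with D = diag(d) via the Woodbury identity:
    only an r x r 'capacitance' system is solved, so the cost scales as
    O(p r^2) in the dimension p and rank r."""
    Dinv = 1.0 / d
    r = V.shape[1]
    M = np.eye(r) + (V.T * Dinv) @ V                  # r x r capacitance matrix
    return np.diag(Dinv) - (Dinv[:, None] * V) @ np.linalg.solve(M, V.T * Dinv)
```

The same identity gives the log-determinant cheaply, which is what makes each greedy log-likelihood evaluation affordable at scale.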
Do Time-Varying Covariances, Volatility Comovement and Spillover Matter?
Lakshmi Balasubramanyan
2005-01-01
Financial markets and their respective assets are so intertwined that analyzing any single market in isolation ignores important information. We investigate whether time-varying volatility comovement and spillover impact the true variance-covariance matrix under a time-varying correlation setup. Statistically significant volatility spillover and comovement between the US, UK and Japan is found. To demonstrate the importance of modelling volatility comovement and spillover, we look at a simple portfo...
Emergent gravity on covariant quantum spaces in the IKKT model
Energy Technology Data Exchange (ETDEWEB)
Steinacker, Harold C. [Faculty of Physics, University of Vienna,Boltzmanngasse 5, A-1090 Vienna (Austria)
2016-12-30
We study perturbations of 4-dimensional fuzzy spheres as backgrounds in the IKKT or IIB matrix model. Gauge fields and metric fluctuations are identified among the excitation modes with lowest spin, supplemented by a tower of higher-spin fields. They arise from an internal structure which can be viewed as a twisted bundle over S{sup 4}, leading to a covariant noncommutative geometry. The linearized 4-dimensional Einstein equations are obtained from the classical matrix model action under certain conditions, modified by an IR cutoff. Some one-loop contributions to the effective action are computed using the formalism of string states.
Structure of Poincare covariant tensor operators in quantum mechanical models
International Nuclear Information System (INIS)
Polyzou, W.N.; Klink, W.H.
1988-01-01
The structure of operators that transform covariantly in Poincare invariant quantum mechanical models is analyzed. These operators are shown to have an interaction dependence that comes from the geometry of the Poincare group. The operators can be expressed in terms of matrix elements in a complete set of eigenstates of the mass and spin operators associated with the dynamical representation of the Poincare group. The matrix elements are factored into geometrical coefficients (Clebsch--Gordan coefficients for the Poincare group) and invariant matrix elements. The geometrical coefficients are fixed by the transformation properties of the operator and the eigenvalue spectrum of the mass and spin. The invariant matrix elements, which distinguish between different operators with the same transformation properties, are given in terms of a set of invariant form factors. copyright 1988 Academic Press, Inc
A Small Guide to Generating Covariances of Experimental Data
International Nuclear Information System (INIS)
Mannhart, Wolf
2011-05-01
A complete description of the uncertainties of an experiment can only be realized by a detailed list of all the uncertainty components, their values and a specification of the existing correlations between the data. Based on such information the covariance matrix can be generated, which is necessary for any further proceeding with the experimental data. It is not necessary, and not recommended, that an experimenter evaluates this covariance matrix. The reason for this is that an incorrectly evaluated final covariance matrix can never be corrected if the details are not given. (Such obviously wrong covariance matrices have occasionally been found in the recent literature.) Hence quotation of a covariance matrix is an additional step which should not occur without quoting a detailed list of the various uncertainty components and their correlations as well. It must be hoped that editors of journals will understand these necessary requirements. The generalized least squares procedure shown permits an easy way of interchanging data D0 with parameter estimates P. This means new data can easily be combined with an earlier evaluation. However, this is only valid as long as the new data have no correlation with any of the older data of the prior evaluation. Otherwise the old data which show correlation with new data have to be extracted from the evaluation and then, together with the new data and taking account of the correlation, have to be added again to the reduced evaluation. In most cases this step cannot be performed and the evaluation has to be completely redone. A partial way out is given if the evaluation is performed step by step and the results of each step are stored. Then the evaluation need only be repeated from the first step which contains correlated data, while all earlier steps remain unchanged. Finally it should be noted that the addition of a small set of new data to a prior evaluation consisting of a large number of
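The construction the guide asks experimenters to document can be sketched directly: uncorrelated (statistical) components contribute only to the diagonal, while each fully correlated systematic component contributes an outer product. A minimal illustration (partially correlated components, which would carry a correlation factor between 0 and 1, are omitted for brevity):

```python
import numpy as np

def build_covariance(stat_unc, sys_components):
    """Covariance matrix from documented uncertainty components:
    stat_unc holds the uncorrelated uncertainty of each data point;
    each row of sys_components is one fully correlated systematic
    uncertainty (correlation coefficient 1 within the component)."""
    V = np.diag(np.asarray(stat_unc, dtype=float) ** 2)
    for s in np.atleast_2d(np.asarray(sys_components, dtype=float)):
        V += np.outer(s, s)
    return V
```

Because every term is an outer product or a non-negative diagonal, the result is positive semi-definite by construction, which is exactly why building the matrix from the documented components is safer than quoting a hand-assembled one.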
Pu239 Cross-Section Variations Based on Experimental Uncertainties and Covariances
Energy Technology Data Exchange (ETDEWEB)
Sigeti, David Edward [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Williams, Brian J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Parsons, D. Kent [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-10-18
Algorithms and software have been developed for producing variations in plutonium-239 neutron cross sections based on experimental uncertainties and covariances. The varied cross-section sets may be produced as random samples from the multivariate normal distribution defined by an experimental mean vector and covariance matrix, or they may be produced as Latin-Hypercube/Orthogonal-Array samples (based on the same means and covariances) for use in parametrized studies. The variations obey two classes of constraints that are obligatory for cross-section sets and which put related constraints on the mean vector and covariance matrix that determine the sampling. Because the experimental means and covariances do not obey some of these constraints to sufficient precision, imposing the constraints requires modifying the experimental mean vector and covariance matrix. Modification is done with an algorithm based on linear algebra that minimizes changes to the means and covariances while ensuring that the operations that impose the different constraints do not conflict with each other.
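The sampling step described above is standard multivariate-normal sampling; what is report-specific is the constraint repair of the experimental covariance. A minimal eigenvalue-clipping stand-in for that repair is sketched here (the report's change-minimizing linear-algebra algorithm is more elaborate):

```python
import numpy as np

def nearest_psd(cov):
    """Symmetrize and clip negative eigenvalues to zero -- a minimal
    stand-in for the report's constraint-preserving adjustment."""
    sym = (cov + cov.T) / 2
    w, V = np.linalg.eigh(sym)
    return V @ np.diag(np.clip(w, 0.0, None)) @ V.T

def sample_variations(mean, cov, n, seed=0):
    """Draw n varied sets as multivariate-normal samples about the
    experimental mean, after repairing the covariance."""
    rng = np.random.default_rng(seed)
    return rng.multivariate_normal(mean, nearest_psd(cov), size=n)
```

Latin-Hypercube sampling would replace the `multivariate_normal` draw with stratified uniforms pushed through the same covariance factorization.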
Recent Advances with the AMPX Covariance Processing Capabilities in PUFF-IV
International Nuclear Information System (INIS)
Wiarda, Dorothea; Arbanas, Goran; Leal, Luiz C.; Dunn, Michael E.
2008-01-01
The program PUFF-IV is used to process resonance parameter covariance information given in ENDF/B File 32 and point-wise covariance matrices given in ENDF/B File 33 into group-averaged covariance matrices on a user-supplied group structure. For large resonance covariance matrices, found for example in 235U, the execution time of PUFF-IV can be quite long. Recently the code was modified to take advantage of Basic Linear Algebra Subprograms (BLAS) routines for the most time-consuming matrix multiplications, which led to a substantial decrease in execution time. This faster processing capability allowed us to investigate the conversion of File 32 data into File 33 data using a larger number of user-defined groups. While conversion substantially reduces the ENDF/B file size requirements for evaluations with a large number of resonances, a trade-off is made between the number of groups used to represent the resonance parameter covariance as a point-wise covariance matrix and the file size. We are also investigating a hybrid version of the conversion, in which the low-energy part of the File 32 resonance parameter covariance matrix is retained and the correlations with higher energies, as well as the high-energy part, are given in File 33.
Reconstruction of sparse connectivity in neural networks from spike train covariances
International Nuclear Information System (INIS)
Pernice, Volker; Rotter, Stefan
2013-01-01
The inference of causation from correlation is in general highly problematic. Correspondingly, it is difficult to infer the existence of physical synaptic connections between neurons from correlations in their activity. Covariances in neural spike trains and their relation to network structure have been the subject of intense research, both experimentally and theoretically. The influence of recurrent connections on covariances can be characterized directly in linear models, where connectivity in the network is described by a matrix of linear coupling kernels. However, as indirect connections also give rise to covariances, the inverse problem of inferring network structure from covariances can generally not be solved unambiguously. Here we study to what degree this ambiguity can be resolved if the sparseness of neural networks is taken into account. To reconstruct a sparse network, we determine the minimal set of linear couplings consistent with the measured covariances by minimizing the L1 norm of the coupling matrix under appropriate constraints. Contrary to intuition, after stochastic optimization of the coupling matrix, the resulting estimate of the underlying network is directed, despite the fact that a symmetric matrix of count covariances is used for inference. The performance of the new method is best if connections are neither exceedingly sparse, nor too dense, and it is easily applicable for networks of a few hundred nodes. Full coupling kernels can be obtained from the matrix of full covariance functions. We apply our method to networks of leaky integrate-and-fire neurons in an asynchronous–irregular state, where spike train covariances are well described by a linear model. (paper)
Group covariance and metrical theory
International Nuclear Information System (INIS)
Halpern, L.
1983-01-01
The a priori introduction of a Lie group of transformations into a physical theory has often proved to be useful; it usually serves to describe special simplified conditions before a general theory can be worked out. Newton's assumptions of absolute space and time are examples where the Euclidean group and translation group have been introduced. These groups were extended to the Galilei group and modified in the special theory of relativity to the Poincare group to describe physics under the given conditions covariantly in the simplest way. The criticism of the a priori character leads to the formulation of the general theory of relativity. The general metric theory does not really give preference to a particular invariance group - even the principle of equivalence can be adapted to a whole family of groups. The physical laws covariantly inserted into the metric space are however adapted to the Poincare group. 8 references
Castrillon, Julio
2015-11-10
We develop a multi-level restricted Gaussian maximum likelihood method for estimating the covariance function parameters and computing the best unbiased predictor. Our approach produces a new set of multi-level contrasts where the deterministic parameters of the model are filtered out, thus enabling the estimation of the covariance parameters to be decoupled from the deterministic component. Moreover, the multi-level covariance matrix of the contrasts exhibits fast decay that is dependent on the smoothness of the covariance function. Due to the fast decay of the multi-level covariance matrix coefficients, only a small set is computed with a level-dependent criterion. We demonstrate our approach on problems of up to 512,000 observations with a Matérn covariance function and highly irregular placements of the observations. In addition, these problems are numerically unstable and hard to solve with traditional methods.
Phenotypic covariance at species' borders.
Caley, M Julian; Cripps, Edward; Game, Edward T
2013-05-28
Understanding the evolution of species limits is important in ecology, evolution, and conservation biology. Despite its likely importance in the evolution of these limits, little is known about phenotypic covariance in geographically marginal populations, and the degree to which it constrains, or facilitates, responses to selection. We investigated phenotypic covariance in morphological traits at species' borders by comparing phenotypic covariance matrices (P), including the degree of shared structure, the distribution of strengths of pair-wise correlations between traits, the degree of morphological integration of traits, and the ranks of matrices, between central and marginal populations of three species-pairs of coral reef fishes. Greater structural differences in P were observed between populations close to range margins and conspecific populations toward range centres, than between pairs of conspecific populations that were both more centrally located within their ranges. Approximately 80% of all pair-wise trait correlations within populations were greater in the north, but these differences were unrelated to the position of the sampled population with respect to the geographic range of the species. Neither the degree of morphological integration, nor ranks of P, indicated greater evolutionary constraint at range edges. Characteristics of P observed here provide no support for constraint contributing to the formation of these species' borders, but may instead reflect structural change in P caused by selection or drift, and their potential to evolve in the future.
Noisy covariance matrices and portfolio optimization II
Pafka, Szilárd; Kondor, Imre
2003-03-01
Recent studies inspired by results from random matrix theory (Galluccio et al.: Physica A 259 (1998) 449; Laloux et al.: Phys. Rev. Lett. 83 (1999) 1467; Risk 12 (3) (1999) 69; Plerou et al.: Phys. Rev. Lett. 83 (1999) 1471) found that covariance matrices determined from empirical financial time series appear to contain such a high amount of noise that their structure can essentially be regarded as random. This seems, however, to be in contradiction with the fundamental role played by covariance matrices in finance, which constitute the pillars of modern investment theory and have also gained industry-wide applications in risk management. Our paper is an attempt to resolve this embarrassing paradox. The key observation is that the effect of noise strongly depends on the ratio r= n/ T, where n is the size of the portfolio and T the length of the available time series. On the basis of numerical experiments and analytic results for some toy portfolio models we show that for relatively large values of r (e.g. 0.6) noise does, indeed, have the pronounced effect suggested by Galluccio et al. (1998), Laloux et al. (1999) and Plerou et al. (1999) and illustrated later by Laloux et al. (Int. J. Theor. Appl. Finance 3 (2000) 391), Plerou et al. (Phys. Rev. E, e-print cond-mat/0108023) and Rosenow et al. (Europhys. Lett., e-print cond-mat/0111537) in a portfolio optimization context, while for smaller r (around 0.2 or below), the error due to noise drops to acceptable levels. Since the length of available time series is for obvious reasons limited in any practical application, any bound imposed on the noise-induced error translates into a bound on the size of the portfolio. In a related set of experiments we find that the effect of noise depends also on whether the problem arises in asset allocation or in a risk measurement context: if covariance matrices are used simply for measuring the risk of portfolios with a fixed composition rather than as inputs to optimization, the
Energy Technology Data Exchange (ETDEWEB)
Aalbers, R.; Baarsma, B.; Berkhout, P.; Bremer, S.; Gerritsen, M.; De Nooij, M. [SEO Economisch Onderzoek, Amsterdam (Netherlands)
2007-07-15
In the framework of the VBTB ('Van Beleidsbegroting tot Beleidsverantwoording', or 'From budget to balance sheet'), the EIA (Energy Investment Allowance) has been evaluated for the period 2001-2005. Attention has been paid to the relevance of the EIA, its effectiveness and cost-effectiveness, and the execution of the scheme has also been evaluated.
International Nuclear Information System (INIS)
Smith, D.L.
1988-01-01
The last decade has been a period of rapid development in the implementation of covariance-matrix methodology in nuclear data research. This paper offers some perspective on the progress which has been made, on some of the unresolved problems, and on the potential yet to be realized. These discussions address a variety of issues related to the development of nuclear data. Topics examined are: the importance of designing and conducting experiments so that error information is conveniently generated; the procedures for identifying error sources and quantifying their magnitudes and correlations; the combination of errors; the importance of consistent and well-characterized measurement standards; the role of covariances in data parameterization (fitting); the estimation of covariances for values calculated from mathematical models; the identification of abnormalities in covariance matrices and the analysis of their consequences; the problems encountered in representing covariance information in evaluated files; the role of covariances in the weighting of diverse data sets; the comparison of various evaluations; the influence of primary-data covariance in the analysis of covariances for derived quantities (sensitivity); and the role of covariances in the merging of the diverse nuclear data information. 226 refs., 2 tabs
Directory of Open Access Journals (Sweden)
James M. Cheverud
2007-03-01
Comparisons of covariance patterns are becoming more common as interest in the evolution of relationships between traits and in the evolutionary phenotypic diversification of clades have grown. We present parallel analyses of covariance matrix similarity for cranial traits in 14 New World Monkey genera using the Random Skewers (RS, T-statistics, and Common Principal Components (CPC approaches. We find that the CPC approach is very powerful in that with adequate sample sizes, it can be used to detect significant differences in matrix structure, even between matrices that are virtually identical in their evolutionary properties, as indicated by the RS results. We suggest that in many instances the assumption that population covariance matrices are identical be rejected out of hand. The more interesting and relevant question is, How similar are two covariance matrices with respect to their predicted evolutionary responses? This issue is addressed by the random skewers method described here.
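The Random Skewers method referred to above applies the same random selection vectors ("skewers") to both covariance matrices and averages the vector correlation of the predicted responses. A minimal sketch (the number of skewers and the toy matrices are illustrative):

```python
import numpy as np

def random_skewers(P1, P2, n_skewers=1000, seed=0):
    """Random Skewers similarity: apply identical random unit-length
    selection vectors to both covariance matrices and average the
    vector correlation of the predicted responses P @ beta."""
    rng = np.random.default_rng(seed)
    d = P1.shape[0]
    corrs = []
    for _ in range(n_skewers):
        beta = rng.normal(size=d)
        beta /= np.linalg.norm(beta)            # unit-length skewer
        r1, r2 = P1 @ beta, P2 @ beta           # predicted responses
        corrs.append(r1 @ r2 / (np.linalg.norm(r1) * np.linalg.norm(r2)))
    return float(np.mean(corrs))

# identical matrices respond identically to every skewer (similarity 1);
# the score drops as the two covariance structures diverge
similarity = random_skewers(np.eye(4), np.diag([10.0, 1.0, 1.0, 1.0]))
```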
Proofs of Contracted Length Non-covariance
International Nuclear Information System (INIS)
Strel'tsov, V.N.
1994-01-01
Different proofs of contracted length non-covariance are discussed. The way based on the establishment of interval inconstancy (dependence on velocity) seems to be the most convincing one. It is stressed that the known non-covariance of the electromagnetic field energy and momentum of a moving charge ('the problem 4/3') is a direct consequence of contracted length non-covariance. 8 refs
Structural Analysis of Covariance and Correlation Matrices.
Joreskog, Karl G.
1978-01-01
A general approach to analysis of covariance structures is considered, in which the variances and covariances or correlations of the observed variables are directly expressed in terms of the parameters of interest. The statistical problems of identification, estimation and testing of such covariance or correlation structures are discussed.…
Error estimation for ADS nuclear properties by using nuclear data covariances
International Nuclear Information System (INIS)
Tsujimoto, Kazufumi
2005-01-01
Error estimation for nuclear properties of an accelerator-driven subcritical system due to uncertainties in nuclear data was performed. An uncertainty analysis was done using the sensitivity coefficients based on the generalized perturbation theory and the variance matrix data. For major actinides and structural materials, the covariance data in the JENDL-3.3 library were used. For MA, newly evaluated covariance data were used, since no reliable data existed in any library. (author)
Lorentz covariant theory of gravitation
International Nuclear Information System (INIS)
Fagundes, H.V.
1974-12-01
An alternative method for the calculation of second order effects, like the secular shift of Mercury's perihelion, is developed. This method uses the basic ideas of Thirring combined with the more mathematical approach of Feynman. In the case of a static source, the treatment used is greatly simplified. Besides, Einstein-Infeld-Hoffmann's Lagrangian for a system of two particles and spin-orbit and spin-spin interactions of two particles with classical spin, i.e., internal angular momentum in Moller's sense, are obtained from the Lorentz covariant theory.
International Nuclear Information System (INIS)
Sebestyen, A.
1975-07-01
The principle of covariance is extended to coordinates corresponding to internal degrees of freedom. The conditions for a system to be isolated are given. It is shown how internal forces arise in such systems. Equations for internal fields are derived. By an interpretation of the generalized coordinates based on group theory it is shown how particles of the ordinary sense enter into the model and as a simple application the gravitational interaction of two pointlike particles is considered and the shift of the perihelion is deduced. (Sz.Z.)
Covariant gauges at finite temperature
Landshoff, Peter V
1992-01-01
A prescription is presented for real-time finite-temperature perturbation theory in covariant gauges, in which only the two physical degrees of freedom of the gauge-field propagator acquire thermal parts. The propagators for the unphysical degrees of freedom of the gauge field, and for the Faddeev-Popov ghost field, are independent of temperature. This prescription is applied to the calculation of the one-loop gluon self-energy and the two-loop interaction pressure, and is found to be simpler to use than the conventional one.
Phenotypic Covariation and Morphological Diversification in the Ruminant Skull.
Haber, Annat
2016-05-01
Differences among clades in their diversification patterns result from a combination of extrinsic and intrinsic factors. In this study, I examined the role of intrinsic factors in the morphological diversification of ruminants, in general, and in the differences between bovids and cervids, in particular. Using skull morphology, which embodies many of the adaptations that distinguish bovids and cervids, I examined 132 of the 200 extant ruminant species. As a proxy for intrinsic constraints, I quantified different aspects of the phenotypic covariation structure within species and compared them with the among-species divergence patterns, using phylogenetic comparative methods. My results show that for most species, divergence is well aligned with their phenotypic covariance matrix and that those that are better aligned have diverged further away from their ancestor. Bovids have dispersed into a wider range of directions in morphospace than cervids, and their overall disparity is higher. This difference is best explained by the lower eccentricity of bovids' within-species covariance matrices. These results are consistent with the role of intrinsic constraints in determining amount, range, and direction of dispersion and demonstrate that intrinsic constraints can influence macroevolutionary patterns even as the covariance structure evolves.
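The eccentricity of a covariance matrix invoked above can be quantified by the relative dispersion of its eigenvalues; a minimal sketch of one common index (the exact definition used in the study may differ):

```python
import numpy as np

def eccentricity(P):
    """Relative dispersion of the eigenvalues of a covariance matrix:
    0 for a spherical (isotropic) matrix, 1 when all variation is
    concentrated on a single axis."""
    lam = np.linalg.eigvalsh(P)
    lam = lam / lam.sum()                       # normalise to proportions
    d = len(lam)
    return float(np.sqrt(np.sum((lam - 1.0 / d) ** 2) * d / (d - 1)))

spherical = eccentricity(np.eye(5))             # 0: no preferred direction
elongated = eccentricity(np.diag([10.0, 1.0, 1.0, 1.0, 1.0]))
```

Lower eccentricity (as reported for bovids) means variation is spread more evenly across directions, permitting dispersal into a wider range of directions in morphospace.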
Flexible Bayesian Dynamic Modeling of Covariance and Correlation Matrices
Lan, Shiwei
2017-11-08
Modeling covariance (and correlation) matrices is a challenging problem due to the large dimensionality and positive-definiteness constraint. In this paper, we propose a novel Bayesian framework based on decomposing the covariance matrix into variance and correlation matrices. The highlight is that the correlations are represented as products of vectors on unit spheres. We propose a variety of distributions on spheres (e.g. the squared-Dirichlet distribution) to induce flexible prior distributions for covariance matrices that go beyond the commonly used inverse-Wishart prior. To handle the intractability of the resulting posterior, we introduce the adaptive $\Delta$-Spherical Hamiltonian Monte Carlo. We also extend our structured framework to dynamic cases and introduce unit-vector Gaussian process priors for modeling the evolution of correlation among multiple time series. Using an example of Normal-Inverse-Wishart problem, a simulated periodic process, and an analysis of local field potential data (collected from the hippocampus of rats performing a complex sequence memory task), we demonstrated the validity and effectiveness of our proposed framework for (dynamic) modeling covariance and correlation matrices.
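The decomposition at the heart of this framework is the identity Σ = D R D, with D the diagonal matrix of standard deviations and R the correlation matrix. A minimal sketch of splitting and rebuilding (the Bayesian machinery layered on top is not shown):

```python
import numpy as np

def decompose(cov):
    """Split a covariance matrix as Sigma = D R D into a vector of
    standard deviations (the diagonal of D) and a correlation matrix R."""
    sigma = np.sqrt(np.diag(cov))
    R = cov / np.outer(sigma, sigma)
    return sigma, R

def recompose(sigma, R):
    """Rebuild the covariance matrix from the two components."""
    return np.outer(sigma, sigma) * R

cov = np.array([[4.0, 1.2],
                [1.2, 1.0]])
sigma, R = decompose(cov)   # sigma = [2, 1]; R has unit diagonal
```

Separating the two components lets a prior be placed on variances and correlations independently, which is what permits going beyond the inverse-Wishart.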
Covariance specification and estimation to improve top-down Green House Gas emission estimates
Ghosh, S.; Lopez-Coto, I.; Prasad, K.; Whetstone, J. R.
2015-12-01
The National Institute of Standards and Technology (NIST) operates the North-East Corridor (NEC) project and the Indianapolis Flux Experiment (INFLUX) in order to develop measurement methods to quantify sources of Greenhouse Gas (GHG) emissions as well as their uncertainties in urban domains using a top down inversion method. Top down inversion updates prior knowledge using observations in a Bayesian way. One primary consideration in a Bayesian inversion framework is the covariance structure of (1) the emission prior residuals and (2) the observation residuals (i.e. the difference between observations and model predicted observations). These covariance matrices are respectively referred to as the prior covariance matrix and the model-data mismatch covariance matrix. It is known that the choice of these covariances can have large effect on estimates. The main objective of this work is to determine the impact of different covariance models on inversion estimates and their associated uncertainties in urban domains. We use a pseudo-data Bayesian inversion framework using footprints (i.e. sensitivities of tower measurements of GHGs to surface emissions) and emission priors (based on Hestia project to quantify fossil-fuel emissions) to estimate posterior emissions using different covariance schemes. The posterior emission estimates and uncertainties are compared to the hypothetical truth. We find that, if we correctly specify spatial variability and spatio-temporal variability in prior and model-data mismatch covariances respectively, then we can compute more accurate posterior estimates. We discuss few covariance models to introduce space-time interacting mismatches along with estimation of the involved parameters. We then compare several candidate prior spatial covariance models from the Matern covariance class and estimate their parameters with specified mismatches. We find that best-fitted prior covariances are not always best in recovering the truth. To achieve
Covariance Evaluation Methodology for Neutron Cross Sections
Energy Technology Data Exchange (ETDEWEB)
Herman,M.; Arcilla, R.; Mattoon, C.M.; Mughabghab, S.F.; Oblozinsky, P.; Pigni, M.; Pritychenko, b.; Songzoni, A.A.
2008-09-01
We present the NNDC-BNL methodology for estimating neutron cross section covariances in thermal, resolved resonance, unresolved resonance and fast neutron regions. The three key elements of the methodology are Atlas of Neutron Resonances, nuclear reaction code EMPIRE, and the Bayesian code implementing Kalman filter concept. The covariance data processing, visualization and distribution capabilities are integral components of the NNDC methodology. We illustrate its application on examples including relatively detailed evaluation of covariances for two individual nuclei and massive production of simple covariance estimates for 307 materials. Certain peculiarities regarding evaluation of covariances for resolved resonances and the consistency between resonance parameter uncertainties and thermal cross section uncertainties are also discussed.
Siudzińska, Katarzyna; Chruściński, Dariusz
2018-03-01
In matrix algebras, we introduce a class of linear maps that are irreducibly covariant with respect to the finite group generated by the Weyl operators. In particular, we analyze the irreducibly covariant quantum channels, that is, the completely positive and trace-preserving linear maps. Interestingly, imposing additional symmetries leads to the so-called generalized Pauli channels, which were recently considered in the context of the non-Markovian quantum evolution. Finally, we provide examples of irreducibly covariant positive but not necessarily completely positive maps.
Poincare covariance and κ-Minkowski spacetime
International Nuclear Information System (INIS)
Dabrowski, Ludwik; Piacitelli, Gherardo
2011-01-01
A fully Poincare covariant model is constructed as an extension of the κ-Minkowski spacetime. Covariance is implemented by a unitary representation of the Poincare group, and thus complies with the original Wigner approach to quantum symmetries. This provides yet another example (besides the DFR model), where Poincare covariance is realised a la Wigner in the presence of two characteristic dimensionful parameters: the light speed and the Planck length. In other words, a Doubly Special Relativity (DSR) framework may well be realised without deforming the meaning of 'Poincare covariance'. -- Highlights: → We construct a 4d model of noncommuting coordinates (quantum spacetime). → The coordinates are fully covariant under the undeformed Poincare group. → Covariance a la Wigner holds in presence of two dimensionful parameters. → Hence we are not forced to deform covariance (e.g. as quantum groups). → The underlying κ-Minkowski model is unphysical; covariantisation does not cure this.
Directory of Open Access Journals (Sweden)
Tania Dehesh
2015-01-01
Background. Univariate meta-analysis (UM) procedure, as a technique that provides a single overall result, has become increasingly popular. Neglecting the existence of other concomitant covariates in the models leads to loss of treatment efficiency. Our aim was proposing four new approximation approaches for the covariance matrix of the coefficients, which is not readily available for the multivariate generalized least square (MGLS) method as a multivariate meta-analysis approach. Methods. We evaluated the efficiency of four new approaches including zero correlation (ZC), common correlation (CC), estimated correlation (EC), and multivariate multilevel correlation (MMC) on the estimation bias, mean square error (MSE), and 95% probability coverage of the confidence interval (CI) in the synthesis of Cox proportional hazard models coefficients in a simulation study. Result. Comparing the results of the simulation study on the MSE, bias, and CI of the estimated coefficients indicated that the MMC approach was the most accurate procedure compared to the EC, CC, and ZC procedures. The precision ranking of the four approaches according to all above settings was MMC ≥ EC ≥ CC ≥ ZC. Conclusion. This study highlights the advantages of MGLS meta-analysis over the UM approach. The results suggested the use of the MMC procedure to overcome the lack of information for having a complete covariance matrix of the coefficients.
Dehesh, Tania; Zare, Najaf; Ayatollahi, Seyyed Mohammad Taghi
2015-01-01
Univariate meta-analysis (UM) procedure, as a technique that provides a single overall result, has become increasingly popular. Neglecting the existence of other concomitant covariates in the models leads to loss of treatment efficiency. Our aim was proposing four new approximation approaches for the covariance matrix of the coefficients, which is not readily available for the multivariate generalized least square (MGLS) method as a multivariate meta-analysis approach. We evaluated the efficiency of four new approaches including zero correlation (ZC), common correlation (CC), estimated correlation (EC), and multivariate multilevel correlation (MMC) on the estimation bias, mean square error (MSE), and 95% probability coverage of the confidence interval (CI) in the synthesis of Cox proportional hazard models coefficients in a simulation study. Comparing the results of the simulation study on the MSE, bias, and CI of the estimated coefficients indicated that MMC approach was the most accurate procedure compared to EC, CC, and ZC procedures. The precision ranking of the four approaches according to all above settings was MMC ≥ EC ≥ CC ≥ ZC. This study highlights advantages of MGLS meta-analysis on UM approach. The results suggested the use of MMC procedure to overcome the lack of information for having a complete covariance matrix of the coefficients.
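The simpler two of the four approximations can be illustrated directly: both build the coefficient covariance from reported standard errors, differing only in the assumed correlation (the EC and MMC variants estimate the correlation from data instead). A sketch with a hypothetical helper name:

```python
import numpy as np

def approx_coef_cov(se, rho=0.0):
    """Approximate covariance matrix of coefficients built from their
    standard errors: rho=0 gives the zero-correlation (ZC) matrix,
    a single shared rho gives the common-correlation (CC) matrix."""
    se = np.asarray(se, dtype=float)
    R = np.full((len(se), len(se)), rho)
    np.fill_diagonal(R, 1.0)
    return np.outer(se, se) * R         # Cov = diag(se) @ R @ diag(se)

zc = approx_coef_cov([0.2, 0.5])              # diagonal: variances only
cc = approx_coef_cov([0.2, 0.5], rho=0.3)     # off-diagonal 0.3*0.2*0.5
```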
Davies, Christopher E; Glonek, Gary Fv; Giles, Lynne C
2017-08-01
One purpose of a longitudinal study is to gain a better understanding of how an outcome of interest changes among a given population over time. In what follows, a trajectory will be taken to mean the series of measurements of the outcome variable for an individual. Group-based trajectory modelling methods seek to identify subgroups of trajectories within a population, such that trajectories that are grouped together are more similar to each other than to trajectories in distinct groups. Group-based trajectory models generally assume a certain structure in the covariances between measurements, for example conditional independence, homogeneous variance between groups or stationary variance over time. Violations of these assumptions could be expected to result in poor model performance. We used simulation to investigate the effect of covariance misspecification on misclassification of trajectories in commonly used models under a range of scenarios. To do this we defined a measure of performance relative to the ideal Bayesian correct classification rate. We found that the more complex models generally performed better over a range of scenarios. In particular, incorrectly specified covariance matrices could significantly bias the results but using models with a correct but more complicated than necessary covariance matrix incurred little cost.
Dark matter statistics for large galaxy catalogs: power spectra and covariance matrices
Klypin, Anatoly; Prada, Francisco
2018-06-01
Large-scale surveys of galaxies require accurate theoretical predictions of the dark matter clustering for thousands of mock galaxy catalogs. We demonstrate that this goal can be achieved with the new Parallel Particle-Mesh (PM) N-body code GLAM at a very low computational cost. We run ~22,000 simulations with ~2 billion particles that provide ~1% accuracy of the dark matter power spectra P(k) for wave-numbers up to k ~ 1 h Mpc⁻¹. Using this large data-set we study the power spectrum covariance matrix. In contrast to many previous analytical and numerical results, we find that the covariance matrix normalised to the power spectrum C(k, k′)/P(k)P(k′) has a complex structure of non-diagonal components: an upturn at small k, followed by a minimum at k ≈ 0.1-0.2 h Mpc⁻¹, and a maximum at k ≈ 0.5-0.6 h Mpc⁻¹. The normalised covariance matrix strongly evolves with redshift: C(k, k′) ∝ δ^α(t)P(k)P(k′), where δ is the linear growth factor and α ≈ 1-1.25, which indicates that the covariance matrix depends on cosmological parameters. We also show that waves longer than 1 h⁻¹ Gpc have very little impact on the power spectrum and covariance matrix. This significantly reduces the computational costs and complexity of theoretical predictions: relatively small volume ~(1 h⁻¹ Gpc)³ simulations capture the necessary properties of dark matter clustering statistics. As our results also indicate, achieving ~1% errors in the covariance matrix for k < 0.5 h Mpc⁻¹ requires a resolution better than ε ~ 0.5 h⁻¹ Mpc.
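The basic estimator behind such studies is the sample covariance of P(k) across the mock ensemble, normalised by the mean spectrum. A toy sketch with synthetic spectra (the GLAM pipeline itself is far more involved; the 10% fluctuation level here is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n_mocks, n_k = 2000, 8

# toy ensemble of power spectra: a smooth mean with 10% fluctuations
mean_pk = 1.0 / (1.0 + np.arange(n_k))
samples = mean_pk * (1.0 + 0.1 * rng.normal(size=(n_mocks, n_k)))

pk = samples.mean(axis=0)                 # ensemble-mean spectrum P(k)
C = np.cov(samples, rowvar=False)         # covariance matrix C(k, k')
C_norm = C / np.outer(pk, pk)             # normalised form C/P(k)P(k')
```

With independent 10% fluctuations the normalised diagonal sits near 0.01; in real simulations mode coupling fills in the off-diagonal structure that the abstract describes.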
Are Low-order Covariance Estimates Useful in Error Analyses?
Baker, D. F.; Schimel, D.
2005-12-01
Atmospheric trace gas inversions, using modeled atmospheric transport to infer surface sources and sinks from measured concentrations, are most commonly done using least-squares techniques that return not only an estimate of the state (the surface fluxes) but also the covariance matrix describing the uncertainty in that estimate. Besides allowing one to place error bars around the estimate, the covariance matrix may be used in simulation studies to learn what uncertainties would be expected from various hypothetical observing strategies. This error analysis capability is routinely used in designing instrumentation, measurement campaigns, and satellite observing strategies. For example, Rayner, et al (2002) examined the ability of satellite-based column-integrated CO2 measurements to constrain monthly-average CO2 fluxes for about 100 emission regions using this approach. Exact solutions for both state vector and covariance matrix become computationally infeasible, however, when the surface fluxes are solved at finer resolution (e.g., daily in time, under 500 km in space). It is precisely at these finer scales, however, that one would hope to be able to estimate fluxes using high-density satellite measurements. Non-exact estimation methods such as variational data assimilation or the ensemble Kalman filter could be used, but they achieve their computational savings by obtaining an only approximate state estimate and a low-order approximation of the true covariance. One would like to be able to use this covariance matrix to do the same sort of error analyses as are done with the full-rank covariance, but is it correct to do so? Here we compare uncertainties and 'information content' derived from full-rank covariance matrices obtained from a direct, batch least squares inversion to those from the incomplete-rank covariance matrices given by a variational data assimilation approach solved with a variable metric minimization technique (the Broyden-Fletcher-Goldfarb
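For a linear Gaussian inversion, the full-rank posterior covariance that the low-order approximations are compared against has a closed form, A = (HᵀR⁻¹H + B⁻¹)⁻¹. A minimal sketch, assuming a toy observation operator and covariances of our own choosing:

```python
import numpy as np

def posterior_covariance(H, R, B):
    """Closed-form posterior covariance of a linear Gaussian inversion:
    A = (H^T R^-1 H + B^-1)^-1, the full-rank benchmark against which
    approximate (variational / ensemble) covariances are judged."""
    Rinv = np.linalg.inv(R)
    Binv = np.linalg.inv(B)
    return np.linalg.inv(H.T @ Rinv @ H + Binv)

H = np.array([[1.0, 0.0],
              [1.0, 1.0]])    # toy observation operator
R = 0.1 * np.eye(2)           # observation-error covariance
B = np.eye(2)                 # prior (background) flux covariance
A = posterior_covariance(H, R, B)
# observing shrinks the posterior variances below the prior's
```

The explicit inverses are exactly what becomes infeasible at fine flux resolution, motivating the low-order approximations discussed in the abstract.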
Extreme eigenvalues of sample covariance and correlation matrices
DEFF Research Database (Denmark)
Heiny, Johannes
This thesis is concerned with asymptotic properties of the eigenvalues of high-dimensional sample covariance and correlation matrices under an infinite fourth moment of the entries. In the first part, we study the joint distributional convergence of the largest eigenvalues of the sample covariance matrix of a p-dimensional heavy-tailed time series when p converges to infinity together with the sample size n. We generalize the growth rates of p existing in the literature. Assuming a regular variation condition with tail index ... eigenvalues are essentially determined by the extreme order statistics from an array of iid random variables. The asymptotic behavior of the extreme eigenvalues is then derived routinely from classical extreme value theory. The resulting approximations are strikingly simple considering the high dimension...
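The heavy-tailed regime can be illustrated numerically: with entries of infinite fourth moment, the top eigenvalues of the sample covariance matrix are driven by the few largest squared entries rather than by the bulk. A toy sketch, assuming symmetrised Pareto entries with tail index 3 (so the fourth moment diverges):

```python
import numpy as np

rng = np.random.default_rng(3)
p, n = 50, 200

# symmetrised Pareto entries, tail index 3: infinite fourth moment
X = rng.pareto(3.0, size=(p, n)) * rng.choice([-1.0, 1.0], size=(p, n))

sample_cov = X @ X.T / n                 # p x p sample covariance
eigvals = np.linalg.eigvalsh(sample_cov)
lam_max = eigvals[-1]
# the largest eigenvalue tracks the rows containing the most extreme
# squared entries of X, in line with the extreme-value description
```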
Evaluation of covariances for resolved resonance parameters of 235U, 238U, and 239Pu in JENDL-3.2
International Nuclear Information System (INIS)
Kawano, Toshihiko; Shibata, Keiichi
2003-02-01
Evaluation of covariances for resolved resonance parameters of 235U, 238U, and 239Pu was carried out. Although a large number of resolved resonances are observed for major actinides, uncertainties in averaged cross sections are more important than those in resonance parameters in reactor calculations. We developed a simple method which derives a covariance matrix for the resolved resonance parameters from uncertainties in the averaged cross sections. The method was applied to evaluate the covariance data for some important actinides, and the results were compiled in the JENDL-3.2 covariance file. (author)
Characteristic Polynomials of Sample Covariance Matrices: The Non-Square Case
Kösters, Holger
2009-01-01
We consider the sample covariance matrices of large data matrices which have i.i.d. complex matrix entries and which are non-square in the sense that the difference between the number of rows and the number of columns tends to infinity. We show that the second-order correlation function of the characteristic polynomial of the sample covariance matrix is asymptotically given by the sine kernel in the bulk of the spectrum and by the Airy kernel at the edge of the spectrum. Similar results are g...
Institute of Scientific and Technical Information of China (English)
Xiaogu ZHENG
2009-01-01
An adaptive estimation of forecast error covariance matrices is proposed for Kalman filtering data assimilation. A forecast error covariance matrix is initially estimated using an ensemble of perturbation forecasts. This initially estimated matrix is then adjusted with scale parameters that are adaptively estimated by minimizing the −2 log-likelihood of observed-minus-forecast residuals. The proposed approach could be applied to Kalman filtering data assimilation with imperfect models when the model error statistics are not known. A simple nonlinear model (Burgers' equation model) is used to demonstrate the efficacy of the proposed approach.
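The adaptive step amounts to choosing scale parameters that minimize the −2 log-likelihood of the innovations, d ~ N(0, λHPHᵀ + R). A minimal sketch with a single scalar inflation factor (the paper estimates several parameters; the function name and toy setup are ours):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_scale(D, HPHt, R):
    """Estimate a scalar inflation factor lambda for the ensemble-based
    forecast error covariance by minimizing the -2 log-likelihood of the
    observed-minus-forecast residuals d ~ N(0, lambda*HPHt + R)."""
    n = D.shape[0]
    def neg2loglik(lam):
        S = lam * HPHt + R
        _, logdet = np.linalg.slogdet(S)
        # n*log|S| + sum_i d_i^T S^-1 d_i  (constants dropped)
        return n * logdet + np.sum(D * np.linalg.solve(S, D.T).T)
    return minimize_scalar(neg2loglik, bounds=(1e-6, 100.0),
                           method="bounded").x

rng = np.random.default_rng(2)
HPHt, R = np.eye(3), 0.5 * np.eye(3)      # toy projected covariances
# residuals drawn with true scale 2.0, i.e. innovation covariance 2.5*I
D = rng.normal(scale=np.sqrt(2.5), size=(200, 3))
lam_hat = fit_scale(D, HPHt, R)           # should land near 2.0
```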
Non-Critical Covariant Superstrings
Grassi, P A
2005-01-01
We construct a covariant description of non-critical superstrings in even dimensions. We construct explicitly supersymmetric hybrid type variables in a linear dilaton background, and study an underlying N=2 twisted superconformal algebra structure. We find similarities between non-critical superstrings in 2n+2 dimensions and critical superstrings compactified on CY_(4-n) manifolds. We study the spectrum of the non-critical strings, and in particular the Ramond-Ramond massless fields. We use the supersymmetric variables to construct the non-critical superstrings sigma-model action in curved target space backgrounds with coupling to the Ramond-Ramond fields. We consider as an example non-critical type IIA strings on AdS_2 background with Ramond-Ramond 2-form flux.
Super-sample covariance approximations and partial sky coverage
Lacasa, Fabien; Lima, Marcos; Aguena, Michel
2018-04-01
Super-sample covariance (SSC) is the dominant source of statistical error on large scale structure (LSS) observables for both current and future galaxy surveys. In this work, we concentrate on the SSC of cluster counts, also known as sample variance, which is particularly useful for the self-calibration of the cluster observable-mass relation; our approach can similarly be applied to other observables, such as galaxy clustering and lensing shear. We first examined the accuracy of two analytical approximations proposed in the literature for the flat sky limit, finding that they are accurate at the 15% and 30-35% levels, respectively, for covariances of counts in the same redshift bin. We then developed a harmonic expansion formalism that allows for the prediction of SSC in an arbitrary survey mask geometry, such as the large sky areas of current and future surveys. We show analytically and numerically that this formalism recovers the full sky and flat sky limits present in the literature. We then present an efficient numerical implementation of the formalism, which allows fast and easy runs of covariance predictions when the survey mask is modified. We applied our method to a mask that is broadly similar to the Dark Energy Survey footprint, finding a non-negligible negative cross-z covariance, i.e. redshift bins are anti-correlated. We also examined the case of data removal from holes due to, for example, bright stars, quality cuts, or systematic removals, and found that this does not have noticeable effects on the structure of the SSC matrix, only rescaling its amplitude by the effective survey area. These advances enable analytical covariances of LSS observables to be computed for current and future galaxy surveys, which cover large areas of the sky where the flat sky approximation fails.
TRANSPOSABLE REGULARIZED COVARIANCE MODELS WITH AN APPLICATION TO MISSING DATA IMPUTATION.
Allen, Genevera I; Tibshirani, Robert
2010-06-01
Missing data estimation is an important challenge with high-dimensional data arranged in the form of a matrix. Typically this data matrix is transposable, meaning that either the rows, the columns, or both can be treated as features. To model transposable data, we present a modification of the matrix-variate normal, the mean-restricted matrix-variate normal, in which the rows and columns each have a separate mean vector and covariance matrix. By placing additive penalties on the inverse covariance matrices of the rows and columns, these so-called transposable regularized covariance models allow for maximum likelihood estimation of the mean and non-singular covariance matrices. Using these models, we formulate EM-type algorithms for missing data imputation in both the multivariate and transposable frameworks. We present theoretical results exploiting the structure of our transposable models that allow these models and imputation methods to be applied to high-dimensional data. Simulations and results on microarray data and the Netflix data show that these imputation techniques often outperform existing methods and offer a greater degree of flexibility.
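The core step that EM-type imputation algorithms iterate, replacing missing entries by their conditional expectation under a multivariate normal, can be written compactly. This sketch assumes a known mean and covariance (in the ordinary multivariate framework, not the paper's transposable model); the function name is an assumption.

```python
import numpy as np

def impute_conditional_mean(x, mu, Sigma):
    """E-step style imputation: replace missing entries (np.nan) of x by
    their conditional expectation under N(mu, Sigma):
        E[x_m | x_o] = mu_m + Sigma_mo Sigma_oo^{-1} (x_o - mu_o).
    """
    x = np.asarray(x, dtype=float)
    m = np.isnan(x)
    if not m.any():
        return x
    o = ~m
    Soo = Sigma[np.ix_(o, o)]   # observed-observed block
    Smo = Sigma[np.ix_(m, o)]   # missing-observed block
    x_imp = x.copy()
    x_imp[m] = mu[m] + Smo @ np.linalg.solve(Soo, x[o] - mu[o])
    return x_imp
```

For a bivariate standard normal with correlation 0.5, observing x1 = 1 imputes x2 = 0.5, the familiar regression-toward-the-mean prediction.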
ISSUES IN NEUTRON CROSS SECTION COVARIANCES
Energy Technology Data Exchange (ETDEWEB)
Mattoon, C.M.; Oblozinsky,P.
2010-04-30
We review neutron cross section covariances in both the resonance and fast neutron regions with the goal of identifying existing issues in evaluation methods and their impact on covariances. We also outline ideas for suitable covariance quality assurance procedures. We show that the topic of covariance data remains controversial, the evaluation methodologies are not fully established, and covariances produced by different approaches have unacceptable spread. The main controversy lies between the very low uncertainties generated by rigorous evaluation methods and the much larger uncertainties based on simple estimates from experimental data. Since the evaluators tend to trust the former, while the users tend to trust the latter, this controversy has considerable practical implications. Dedicated effort is needed to arrive at covariance evaluation methods that would resolve this issue and produce results accepted internationally both by evaluators and users.
Covariant diagrams for one-loop matching
Energy Technology Data Exchange (ETDEWEB)
Zhang, Zhengkang [Michigan Center for Theoretical Physics (MCTP), University of Michigan,450 Church Street, Ann Arbor, MI 48109 (United States); Deutsches Elektronen-Synchrotron (DESY),Notkestraße 85, 22607 Hamburg (Germany)
2017-05-30
We present a diagrammatic formulation of recently-revived covariant functional approaches to one-loop matching from an ultraviolet (UV) theory to a low-energy effective field theory. Various terms following from a covariant derivative expansion (CDE) are represented by diagrams which, unlike conventional Feynman diagrams, involve gauge-covariant quantities and are thus dubbed “covariant diagrams.” The use of covariant diagrams helps organize and simplify one-loop matching calculations, which we illustrate with examples. Of particular interest is the derivation of UV model-independent universal results, which reduce matching calculations of specific UV models to applications of master formulas. We show how such derivation can be done in a more concise manner than the previous literature, and discuss how additional structures that are not directly captured by existing universal results, including mixed heavy-light loops, open covariant derivatives, and mixed statistics, can be easily accounted for.
Covariant diagrams for one-loop matching
International Nuclear Information System (INIS)
Zhang, Zhengkang
2017-01-01
We present a diagrammatic formulation of recently-revived covariant functional approaches to one-loop matching from an ultraviolet (UV) theory to a low-energy effective field theory. Various terms following from a covariant derivative expansion (CDE) are represented by diagrams which, unlike conventional Feynman diagrams, involve gauge-covariant quantities and are thus dubbed “covariant diagrams.” The use of covariant diagrams helps organize and simplify one-loop matching calculations, which we illustrate with examples. Of particular interest is the derivation of UV model-independent universal results, which reduce matching calculations of specific UV models to applications of master formulas. We show how such derivation can be done in a more concise manner than the previous literature, and discuss how additional structures that are not directly captured by existing universal results, including mixed heavy-light loops, open covariant derivatives, and mixed statistics, can be easily accounted for.
Improvement of covariance data for fast reactors
International Nuclear Information System (INIS)
Shibata, Keiichi; Hasegawa, Akira
2000-02-01
We estimated covariances of the JENDL-3.2 data on the nuclides and reactions needed to analyze fast-reactor cores over the past three years, and produced covariance files. The present work was undertaken to re-examine the covariance files and to make some improvements. The covariances improved are those for the inelastic scattering cross section of 16O, the total cross section of 23Na, the fission cross section of 235U, the capture cross section of 238U, and the resolved resonance parameters for 238U. Moreover, the covariances of 233U data were newly estimated in the present work. The covariances obtained were compiled in the ENDF-6 format. (author)
Franklin, Joel N
2003-01-01
Mathematically rigorous introduction covers vector and matrix norms, the condition-number of a matrix, positive and irreducible matrices, much more. Only elementary algebra and calculus required. Includes problem-solving exercises. 1968 edition.
Sparse inverse covariance estimation with the graphical lasso.
Friedman, Jerome; Hastie, Trevor; Tibshirani, Robert
2008-07-01
We consider the problem of estimating sparse graphs by a lasso penalty applied to the inverse covariance matrix. Using a coordinate descent procedure for the lasso, we develop a simple algorithm, the graphical lasso, that is remarkably fast: it solves a 1000-node problem (approximately 500,000 parameters) in at most a minute and is 30-4000 times faster than competing methods. It also provides a conceptual link between the exact problem and the approximation suggested by Meinshausen and Bühlmann (2006). We illustrate the method on some cell-signaling data from proteomics.
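The block coordinate descent at the heart of the graphical lasso can be sketched with numpy alone: cycle over rows/columns of the working covariance W, and solve each lasso subproblem itself by coordinate descent. This is a minimal, unoptimized sketch of the algorithm's structure, not the authors' implementation; loop limits and tolerances are illustrative assumptions.

```python
import numpy as np

def graphical_lasso(S, rho, n_outer=50, n_inner=100, tol=1e-6):
    """Block coordinate descent for the graphical lasso (Friedman,
    Hastie & Tibshirani 2008 style). S: empirical covariance matrix;
    rho: lasso penalty. Returns (W, Theta) where W estimates the
    covariance and Theta = W^{-1} the (sparse) precision matrix."""
    p = S.shape[0]
    W = S + rho * np.eye(p)   # working covariance estimate
    B = np.zeros((p, p))      # stored lasso coefficients per column
    for _ in range(n_outer):
        W_old = W.copy()
        for j in range(p):
            idx = np.arange(p) != j
            W11 = W[np.ix_(idx, idx)]
            s12 = S[idx, j]
            beta = B[idx, j].copy()
            # coordinate descent for the lasso subproblem:
            #   min_beta 0.5 beta' W11 beta - s12' beta + rho ||beta||_1
            for _ in range(n_inner):
                beta_prev = beta.copy()
                for k in range(p - 1):
                    r = s12[k] - W11[k] @ beta + W11[k, k] * beta[k]
                    beta[k] = np.sign(r) * max(abs(r) - rho, 0.0) / W11[k, k]
                if np.max(np.abs(beta - beta_prev)) < tol:
                    break
            B[idx, j] = beta
            W[idx, j] = W11 @ beta   # update the j-th row/column of W
            W[j, idx] = W[idx, j]
        if np.max(np.abs(W - W_old)) < tol:
            break
    Theta = np.linalg.inv(W)
    return W, Theta
```

With a penalty larger than every off-diagonal covariance, every lasso coefficient is thresholded to zero and the estimated precision matrix comes out diagonal, the expected extreme of the sparsity path.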
Neutron spectrum adjustment. The role of covariances
International Nuclear Information System (INIS)
Remec, I.
1992-01-01
The neutron spectrum adjustment method is briefly reviewed. A practical example dealing with the determination of power reactor pressure vessel exposure rates is analysed. The adjusted exposure rates are found to be only slightly affected by the covariances of the measured reaction rates and activation cross sections, while the multigroup spectra covariances are found to be important. Approximate spectra covariance matrices, as suggested in ASTM E944-89, were found useful, but care is advised if they are applied in adjustments of spectra at locations without dosimetry. (author)
Modifications of Sp(2) covariant superfield quantization
Energy Technology Data Exchange (ETDEWEB)
Gitman, D.M.; Moshin, P.Yu
2003-12-04
We propose a modification of the Sp(2) covariant superfield quantization to realize a superalgebra of generating operators isomorphic to the massless limit of the corresponding superalgebra of the osp(1,2) covariant formalism. The modified scheme ensures the compatibility of the superalgebra of generating operators with extended BRST symmetry without imposing restrictions eliminating superfield components from the quantum action. The formalism coincides with the Sp(2) covariant superfield scheme and with the massless limit of the osp(1,2) covariant quantization in particular cases of gauge-fixing and solutions of the quantum master equations.
Competing risks and time-dependent covariates
DEFF Research Database (Denmark)
Cortese, Giuliana; Andersen, Per K
2010-01-01
Time-dependent covariates are frequently encountered in regression analysis for event history data and competing risks. They are often essential predictors, which cannot be substituted by time-fixed covariates. This study briefly recalls the different types of time-dependent covariates, as classified by Kalbfleisch and Prentice [The Statistical Analysis of Failure Time Data, Wiley, New York, 2002], with the intent of clarifying their role and emphasizing the limitations in standard survival models and in the competing risks setting. If random (internal) time-dependent covariates…
Activities of covariance utilization working group
International Nuclear Information System (INIS)
Tsujimoto, Kazufumi
2013-01-01
During the past decade, there has been an interest in the calculational uncertainties induced by nuclear data uncertainties in the neutronics design of advanced nuclear systems. The covariance nuclear data is absolutely essential for the uncertainty analysis. In the latest version of JENDL, JENDL-4.0, the covariance data for many nuclides, especially actinide nuclides, was substantially enhanced. The growing interest in uncertainty analysis and covariance data has led to the organisation of the working group for covariance utilization under the JENDL committee. (author)
Li, Baoyue; Bruyneel, Luk; Lesaffre, Emmanuel
2014-05-20
A traditional Gaussian hierarchical model assumes a nested multilevel structure for the mean and a constant variance at each level. We propose a Bayesian multivariate multilevel factor model that assumes a multilevel structure for both the mean and the covariance matrix. That is, in addition to a multilevel structure for the mean, we also assume that the covariance matrix depends on covariates and random effects. This allows one to explore whether the covariance structure depends on the values of the higher levels, and as such models heterogeneity in the variances and correlation structure of the multivariate outcome across the higher-level values. The approach is applied to the three-dimensional vector of burnout measurements collected on nurses in a large European study to answer the research question of whether the covariance matrix of the outcomes depends on recorded system-level features in the organization of nursing care, but also on unrecorded factors that vary with countries, hospitals, and nursing units. Simulations illustrate the performance of our modeling approach. Copyright © 2013 John Wiley & Sons, Ltd.
General covariance and quantum theory
International Nuclear Information System (INIS)
Mashhoon, B.
1986-01-01
The extension of the principle of relativity to general coordinate systems is based on the hypothesis that an accelerated observer is locally equivalent to a hypothetical inertial observer with the same velocity as the noninertial observer. This hypothesis of locality is expected to be valid for classical particle phenomena as well as for classical wave phenomena but only in the short-wavelength approximation. The generally covariant theory is therefore expected to be in conflict with the quantum theory which is based on wave-particle duality. This is explicitly demonstrated for the frequency of electromagnetic radiation measured by a uniformly rotating observer. The standard Doppler formula is shown to be valid only in the geometric optics approximation. A new definition for the frequency is proposed, and the resulting formula for the frequency measured by the rotating observer is shown to be consistent with expectations based on the classical theory of electrons. A tentative quantum theory is developed on the basis of the generalization of the Bohr frequency condition to include accelerated observers. The description of the causal sequence of events is assumed to be independent of the motion of the observer. Furthermore, the quantum hypothesis is supposed to be valid for all observers. The implications of this theory are critically examined. The new formula for frequency, which is still based on the hypothesis of locality, leads to the observation of negative energy quanta by the rotating observer and is therefore in conflict with the quantum theory
Statistical mechanics of learning orthogonal signals for general covariance models
International Nuclear Information System (INIS)
Hoyle, David C
2010-01-01
Statistical mechanics techniques have proved to be useful tools in quantifying the accuracy with which signal vectors are extracted from experimental data. However, analysis has previously been limited to specific model forms for the population covariance C, which may be inappropriate for real world data sets. In this paper we obtain new statistical mechanical results for a general population covariance matrix C. For data sets consisting of p sample points in R^N we use the replica method to study the accuracy of orthogonal signal vectors estimated from the sample data. In the asymptotic limit of N,p→∞ at fixed α = p/N, we derive analytical results for the signal direction learning curves. In the asymptotic limit the learning curves follow a single universal form, each displaying a retarded learning transition. An explicit formula for the location of the retarded learning transition is obtained and we find marked variation in the location of the retarded learning transition dependent on the distribution of population covariance eigenvalues. The results of the replica analysis are confirmed against simulation.
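The retarded learning transition can be illustrated with a spiked-covariance toy simulation: plant a signal direction v in the population covariance C = I + strength * v v^T, and measure the overlap between v and the leading eigenvector of the sample covariance. Above the transition the overlap is large; below it, the estimate is essentially random. This is a hedged toy illustration, not the paper's replica calculation; the model and function name are assumptions.

```python
import numpy as np

def top_eigvec_overlap(N, p, strength, rng):
    """Overlap |u . v| between a planted signal direction v and the
    leading eigenvector u of the sample covariance, in the spiked model
    C = I + strength * v v^T with p samples in R^N."""
    v = rng.standard_normal(N)
    v /= np.linalg.norm(v)
    # draw x = z + sqrt(strength) * g * v, z ~ N(0, I), g ~ N(0, 1),
    # so that Cov(x) = I + strength * v v^T
    X = rng.standard_normal((p, N)) + np.sqrt(strength) * rng.standard_normal((p, 1)) * v
    S = X.T @ X / p
    w, U = np.linalg.eigh(S)       # ascending eigenvalues
    return abs(U[:, -1] @ v)       # overlap with the top eigenvector
```

At α = p/N = 5 a spike of strength 4 sits well above the transition and the overlap is near 1, while with no spike the overlap is near zero.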
Parameters of the covariance function of galaxies
International Nuclear Information System (INIS)
Fesenko, B.I.; Onuchina, E.V.
1988-01-01
The two-point angular covariance functions for two samples of galaxies are considered using quick methods of analysis. It is concluded that in the previous investigations the amplitude of the covariance function in the Lick counts was overestimated and the rate of decrease of the function underestimated
Covariance Function for Nearshore Wave Assimilation Systems
2018-01-30
…which is applicable for any spectral wave model. The four-dimensional variational (4DVar) assimilation methods are based on the mathematical … covariance can be modeled by a parameterized Gaussian function; for nearshore wave assimilation applications, the covariance function depends primarily on …
Treatment Effects with Many Covariates and Heteroskedasticity
DEFF Research Database (Denmark)
Cattaneo, Matias D.; Jansson, Michael; Newey, Whitney K.
The linear regression model is widely used in empirical work in Economics. Researchers often include many covariates in their linear model specification in an attempt to control for confounders. We give inference methods that allow for many covariates and heteroskedasticity. Our results...
ERRORJ, Multigroup covariance matrices generation from ENDF-6 format
International Nuclear Information System (INIS)
Chiba, Go
2007-01-01
1 - Description of program or function: ERRORJ produces multigroup covariance matrices from ENDF-6 format following mainly the methods of the ERRORR module in NJOY94.105. New version differs from previous version in the following features: Additional features in ERRORJ with respect to the NJOY94.105/ERRORR module: - expands processing for the covariance matrices of resolved and unresolved resonance parameters; - processes average cosine of scattering angle and fission spectrum; - treats cross-correlation between different materials and reactions; - accepts input of multigroup constants with various forms (user input, GENDF, etc.); - outputs files with various formats through utility NJOYCOVX (COVERX format, correlation matrix, relative error and standard deviation); - uses a 1% sensitivity method for processing of resonance parameters; - ERRORJ can process the JENDL-3.2 and 3.3 covariance matrices. Additional features of the version 2 with respect to the previous version of ERRORJ: - Since the release of version 2, ERRORJ has been modified to increase its reliability and stability, - calculation of the correlation coefficients in the resonance region, - Option for high-speed calculation is implemented, - Perturbation amount is optimised in a sensitivity calculation, - Effect of the resonance self-shielding can be considered, - a compact covariance format (LCOMP=2) proposed by N. M. Larson can be read. Additional features of the version 2.2.1 with respect to the previous version of ERRORJ: - Several routines were modified to reduce calculation time. The new one needs shorter calculation time (50-70%) than the old version without changing results. - In the U-233 and Pu-241 files of JENDL-3.3 an inconsistency between resonance parameters in MF=32 and those in MF=2 was corrected. NEA-1676/06: This version differs from the previous one (NEA-1676/05) in the following: ERRORJ2.2.1 was modified to treat the self-shielding effect accurately. NEA-1676/07: This version
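The correlation-matrix, relative-error, and standard-deviation output formats mentioned above (the NJOYCOVX outputs) amount to the standard decomposition of a covariance matrix into standard deviations and a correlation matrix. A minimal numpy sketch of that conversion, with an illustrative function name:

```python
import numpy as np

def cov_to_corr(cov):
    """Split a covariance matrix into standard deviations and a
    correlation matrix: corr_ij = cov_ij / (std_i * std_j)."""
    std = np.sqrt(np.diag(cov))
    corr = cov / np.outer(std, std)
    return std, corr
```

For example, cov = [[4, 2], [2, 9]] yields standard deviations (2, 3) and an off-diagonal correlation of 2/(2*3) = 1/3.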
On the algebraic structure of covariant anomalies and covariant Schwinger terms
International Nuclear Information System (INIS)
Kelnhofer, G.
1992-01-01
A cohomological characterization of covariant anomalies and covariant Schwinger terms in an anomalous Yang-Mills theory is formulated and geometrically interpreted. The BRS and anti-BRS transformations are defined as purely differential geometric objects. Finally the covariant descent equations are formulated within this context. (author)
Covariant diagrams for one-loop matching
International Nuclear Information System (INIS)
Zhang, Zhengkang
2016-10-01
We present a diagrammatic formulation of recently-revived covariant functional approaches to one-loop matching from an ultraviolet (UV) theory to a low-energy effective field theory. Various terms following from a covariant derivative expansion (CDE) are represented by diagrams which, unlike conventional Feynman diagrams, involve gauge-covariant quantities and are thus dubbed “covariant diagrams.” The use of covariant diagrams helps organize and simplify one-loop matching calculations, which we illustrate with examples. Of particular interest is the derivation of UV model-independent universal results, which reduce matching calculations of specific UV models to applications of master formulas. We show how such derivation can be done in a more concise manner than the previous literature, and discuss how additional structures that are not directly captured by existing universal results, including mixed heavy-light loops, open covariant derivatives, and mixed statistics, can be easily accounted for.
Covariant diagrams for one-loop matching
Energy Technology Data Exchange (ETDEWEB)
Zhang, Zhengkang [Michigan Univ., Ann Arbor, MI (United States). Michigan Center for Theoretical Physics; Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)
2016-10-15
We present a diagrammatic formulation of recently-revived covariant functional approaches to one-loop matching from an ultraviolet (UV) theory to a low-energy effective field theory. Various terms following from a covariant derivative expansion (CDE) are represented by diagrams which, unlike conventional Feynman diagrams, involve gauge-covariant quantities and are thus dubbed “covariant diagrams.” The use of covariant diagrams helps organize and simplify one-loop matching calculations, which we illustrate with examples. Of particular interest is the derivation of UV model-independent universal results, which reduce matching calculations of specific UV models to applications of master formulas. We show how such derivation can be done in a more concise manner than the previous literature, and discuss how additional structures that are not directly captured by existing universal results, including mixed heavy-light loops, open covariant derivatives, and mixed statistics, can be easily accounted for.
Covariance descriptor fusion for target detection
Cukur, Huseyin; Binol, Hamidullah; Bal, Abdullah; Yavuz, Fatih
2016-05-01
Target detection is one of the most important topics for military and civilian applications. To address such detection tasks, hyperspectral imaging sensors provide image data containing both spatial and spectral information. Target detection in hyperspectral images involves various challenging scenarios. To overcome these challenges, the covariance descriptor presents many advantages. The detection capability of the conventional covariance descriptor technique can be improved by fusion methods. In this paper, hyperspectral bands are clustered according to inter-band correlation. Target detection is then realized by fusing covariance descriptor results based on the band clusters. The proposed combination technique is denoted Covariance Descriptor Fusion (CDF). The efficiency of the CDF is evaluated by applying it to hyperspectral imagery to detect man-made objects. The obtained results show that the CDF performs better than the conventional covariance descriptor.
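A covariance descriptor summarizes a region by the covariance of its per-pixel feature vectors, and descriptors are compared with a metric suited to positive-definite matrices. This sketch uses a generalized-eigenvalue (Förstner-style) distance, a common choice for covariance descriptors; the feature layout, names, and metric are illustrative assumptions, not necessarily the authors' exact pipeline.

```python
import numpy as np

def covariance_descriptor(features):
    """Covariance descriptor of a region: features is an (n_pixels, d)
    array of per-pixel feature vectors (e.g. intensity, gradients, or
    selected spectral bands in the hyperspectral case)."""
    return np.cov(features, rowvar=False)

def descriptor_distance(C1, C2):
    """Distance between two covariance descriptors via generalized
    eigenvalues: d = sqrt(sum_i ln^2 lambda_i(C1, C2)). For SPD inputs
    the eigenvalues of C2^{-1} C1 are real and positive."""
    lam = np.linalg.eigvals(np.linalg.solve(C2, C1)).real
    return np.sqrt(np.sum(np.log(lam) ** 2))
```

The distance is zero for identical descriptors and scale-sensitive otherwise: d(2I, I) in d dimensions is sqrt(d) * ln 2.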
PUFF-III: A Code for Processing ENDF Uncertainty Data Into Multigroup Covariance Matrices
International Nuclear Information System (INIS)
Dunn, M.E.
2000-01-01
PUFF-III is an extension of the previous PUFF-II code that was developed in the 1970s and early 1980s. The PUFF codes process the Evaluated Nuclear Data File (ENDF) covariance data and generate multigroup covariance matrices on a user-specified energy grid structure. Unlike its predecessor, PUFF-III can process the new ENDF/B-VI data formats. In particular, PUFF-III has the capability to process the spontaneous fission covariances for fission neutron multiplicity. With regard to the covariance data in File 33 of the ENDF system, PUFF-III has the capability to process short-range variance formats, as well as the lumped reaction covariance data formats that were introduced in ENDF/B-V. In addition to the new ENDF formats, a new directory feature is now available that allows the user to obtain a detailed directory of the uncertainty information in the data files without visually inspecting the ENDF data. Following the correlation matrix calculation, PUFF-III also evaluates the eigenvalues of each correlation matrix and tests each matrix for positive definiteness. Additional new features are discussed in the manual. PUFF-III has been developed for implementation in the AMPX code system, and several modifications were incorporated to improve memory allocation tasks and input/output operations. Consequently, the resulting code has a structure that is similar to other modules in the AMPX code system. With the release of PUFF-III, a new and improved covariance processing code is available to process ENDF covariance formats through Version VI
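The eigenvalue screening PUFF-III performs on each correlation matrix (evaluating the eigenvalues and testing positive definiteness) corresponds to a simple numerical check. A minimal sketch; the tolerance and return convention are assumptions:

```python
import numpy as np

def check_correlation_matrix(corr, tol=1e-10):
    """Return the eigenvalues of a (symmetric) correlation matrix and
    whether the matrix is positive definite within tolerance."""
    w = np.linalg.eigvalsh(corr)   # symmetric input -> real eigenvalues
    return w, bool(w.min() > tol)
```

A 2x2 correlation matrix with an off-diagonal entry above 1 in magnitude fails the test: [[1, 1.2], [1.2, 1]] has eigenvalues 2.2 and -0.2.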
Maximum a posteriori covariance estimation using a power inverse wishart prior
DEFF Research Database (Denmark)
Nielsen, Søren Feodor; Sporring, Jon
2012-01-01
The estimation of the covariance matrix is an initial step in many multivariate statistical methods such as principal components analysis and factor analysis, but in many practical applications the dimensionality of the sample space is large compared to the number of samples, and the usual maximum...
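For the ordinary inverse-Wishart prior IW(Psi, nu), the MAP covariance estimate has a closed form: the posterior mode (Psi + n*S) / (nu + n + p + 1), which shrinks the sample covariance S toward the prior scale matrix when samples are few. This sketches that standard case only; the paper's "power inverse Wishart" prior is a variant not reproduced here.

```python
import numpy as np

def map_covariance(S, n, Psi, nu):
    """MAP estimate of a covariance matrix from n observations with
    sample covariance S, under an inverse-Wishart prior IW(Psi, nu):
    the posterior mode is (Psi + n*S) / (nu + n + p + 1)."""
    p = S.shape[0]
    return (Psi + n * S) / (nu + n + p + 1)
```

With n = 0 the estimate reduces to the prior mode Psi / (nu + p + 1); as n grows it converges to S.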
Might "Unique" Factors Be "Common"? On the Possibility of Indeterminate Common-Unique Covariances
Grayson, Dave
2006-01-01
The present paper shows that the usual factor analytic structured data dispersion matrix ΛΨΛ′ + Δ can readily arise from a set of scores y = Λη + ε, where the "common" (η) and "unique" (ε) factors have nonzero covariance: Γ = Cov(ε, η) ≠ 0. Implications of this finding are discussed…
Covariant quantizations in plane and curved spaces
International Nuclear Information System (INIS)
Assirati, J.L.M.; Gitman, D.M.
2017-01-01
We present covariant quantization rules for nonsingular finite-dimensional classical theories with flat and curved configuration spaces. We first construct a family of covariant quantizations in flat spaces and Cartesian coordinates. This family is parametrized by a function ω(θ), θ ∈ (0,1), which describes an ambiguity of the quantization. We generalize this construction, presenting covariant quantizations of theories with flat configuration spaces but with arbitrary curvilinear coordinates. Then we construct a so-called minimal family of covariant quantizations for theories with curved configuration spaces, parametrized by the same function ω(θ). Finally, we describe a wider family of covariant quantizations in curved spaces, parametrized by two functions: the previous ω(θ) and an additional function Θ(x,ξ). The minimal family is recovered from the wider family at Θ = 1. We study the constructed quantizations in detail, proving their consistency and covariance. As a physical application, we consider the quantization of a non-relativistic particle moving in a curved space, discussing the problem of a quantum potential. Applying the covariant quantizations in flat spaces to the old problem of constructing the quantum Hamiltonian in polar coordinates, we directly obtain a correct result. (orig.)
Covariant quantizations in plane and curved spaces
Energy Technology Data Exchange (ETDEWEB)
Assirati, J.L.M. [University of Sao Paulo, Institute of Physics, Sao Paulo (Brazil); Gitman, D.M. [Tomsk State University, Department of Physics, Tomsk (Russian Federation); P.N. Lebedev Physical Institute, Moscow (Russian Federation); University of Sao Paulo, Institute of Physics, Sao Paulo (Brazil)
2017-07-15
We present covariant quantization rules for nonsingular finite-dimensional classical theories with flat and curved configuration spaces. We first construct a family of covariant quantizations in flat spaces and Cartesian coordinates. This family is parametrized by a function ω(θ), θ ∈ (0,1), which describes an ambiguity of the quantization. We generalize this construction, presenting covariant quantizations of theories with flat configuration spaces but with arbitrary curvilinear coordinates. Then we construct a so-called minimal family of covariant quantizations for theories with curved configuration spaces, parametrized by the same function ω(θ). Finally, we describe a wider family of covariant quantizations in curved spaces, parametrized by two functions: the previous ω(θ) and an additional function Θ(x,ξ). The minimal family is recovered from the wider family at Θ = 1. We study the constructed quantizations in detail, proving their consistency and covariance. As a physical application, we consider the quantization of a non-relativistic particle moving in a curved space, discussing the problem of a quantum potential. Applying the covariant quantizations in flat spaces to the old problem of constructing the quantum Hamiltonian in polar coordinates, we directly obtain a correct result. (orig.)
Students’ Covariational Reasoning in Solving Integrals’ Problems
Harini, N. V.; Fuad, Y.; Ekawati, R.
2018-01-01
Covariational reasoning plays an important role in recognizing how quantities vary in learning calculus. This study investigates students' covariational reasoning concerning two covarying quantities in integral problems. Six undergraduate students were chosen to solve problems that involved interpreting and representing how quantities change in tandem. Interviews were conducted to reveal the students' reasoning while solving covariational problems. The results emphasize that the undergraduate students were able to construct the relation of dependent variables that change in tandem with the independent variable. However, students faced difficulty in forming images of continuously changing rates and could not accurately apply the concept of integrals. These findings suggest that calculus instruction should place increased emphasis on coordinating images of two quantities changing in tandem, on instantaneous rates of change, and on promoting conceptual knowledge of integration techniques.
Covariant Quantization with Extended BRST Symmetry
Geyer, B.; Gitman, D. M.; Lavrov, P. M.
1999-01-01
A short review of covariant quantization methods based on BRST-antiBRST symmetry is given. In particular, problems of the correct definition of the Sp(2) symmetric quantization scheme known as triplectic quantization are considered.
Covariant extensions and the nonsymmetric unified field
International Nuclear Information System (INIS)
Borchsenius, K.
1976-01-01
The problem of generally covariant extension of Lorentz invariant field equations, by means of covariant derivatives extracted from the nonsymmetric unified field, is considered. It is shown that the contracted curvature tensor can be expressed in terms of a covariant gauge derivative which contains the gauge derivative corresponding to minimal coupling, if the universal constant p, characterizing the nonsymmetric theory, is fixed in terms of Planck's constant and the elementary quantum of charge. By this choice the spinor representation of the linear connection becomes closely related to the spinor affinity used by Infeld and Van Der Waerden (Sitzungsber. Preuss. Akad. Wiss. Phys. Math. Kl.; 9:380 (1933)) in their generally covariant formulation of Dirac's equation. (author)
Covariance Spectroscopy for Fissile Material Detection
International Nuclear Information System (INIS)
Trainham, Rusty; Tinsley, Jim; Hurley, Paul; Keegan, Ray
2009-01-01
Nuclear fission produces multiple prompt neutrons and gammas at each fission event. The resulting daughter nuclei continue to emit delayed radiation as neutrons boil off, beta decay occurs, etc. All of the radiations are causally connected, and therefore correlated. The correlations are generally positive, but when different decay channels compete, so that some radiations tend to exclude others, negative correlations could also be observed. A similar problem of reduced complexity is that of cascade radiation, whereby a simple radioactive decay produces two or more correlated gamma rays at each decay. Covariance is the usual means for measuring correlation, and techniques of covariance mapping may be useful to produce distinct signatures of special nuclear materials (SNM). A covariance measurement can also be used to filter data streams, because uncorrelated signals are largely rejected. The technique is generally more effective than a coincidence measurement. In this poster, we concentrate on cascades and the covariance filtering problem.
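Covariance mapping over many repeated acquisitions can be sketched in a few lines: build the covariance matrix of per-shot channel counts, so that causally connected emissions appear as off-diagonal structure while uncorrelated background averages away. The simulated cascade below (a shared Poisson count added to two channels) is purely illustrative, not measured data.

```python
import numpy as np

def covariance_map(shots):
    """Covariance map over repeated acquisitions: shots is an array of
    shape (n_shots, n_channels). Correlated emissions show up as
    off-diagonal features; uncorrelated channels average to ~zero."""
    return np.cov(shots, rowvar=False)
```

In a toy simulation where channels 0 and 1 share a Poisson(0.5) cascade on top of independent Poisson(1) backgrounds, the map shows cov[0,1] near 0.5 while cov[0,2] stays near zero.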
Covariant amplitudes in Polyakov string theory
International Nuclear Information System (INIS)
Aoyama, H.; Dhar, A.; Namazie, M.A.
1986-01-01
A manifestly Lorentz-covariant and reparametrization-invariant procedure for computing string amplitudes using Polyakov's formulation is described. Both bosonic and superstring theories are dealt with. The computation of string amplitudes is greatly facilitated by this formalism. (orig.)
Covariance upperbound controllers for networked control systems
International Nuclear Information System (INIS)
Ko, Sang Ho
2012-01-01
This paper deals with designing covariance upperbound controllers for a linear system that can be used in a networked control environment in which control laws are calculated in a remote controller and transmitted through a shared communication link to the plant. In order to compensate for possible packet losses during the transmission, two different techniques are often employed: the zero-input and the hold-input strategy. These use zero input and the latest control input, respectively, when a packet is lost. For each strategy, we synthesize a class of output covariance upperbound controllers for a given covariance upperbound and a packet loss probability. Existence conditions of the covariance upperbound controller are also provided for each strategy. Through numerical examples, performance of the two strategies is compared in terms of feasibility of implementing the controllers
Forecasting Covariance Matrices: A Mixed Frequency Approach
DEFF Research Database (Denmark)
Halbleib, Roxana; Voev, Valeri
This paper proposes a new method for forecasting covariance matrices of financial returns. The model mixes volatility forecasts from a dynamic model of daily realized volatilities estimated with high-frequency data with correlation forecasts based on daily data. This new approach allows for flexible dependence patterns for volatilities and correlations, and can be applied to covariance matrices of large dimensions. The separate modeling of volatility and correlation forecasts considerably reduces the estimation and measurement error implied by the joint estimation and modeling of covariance...
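The combination step can be sketched as follows (a minimal illustration of mixing separately forecast volatilities and correlations, with made-up numbers; not the authors' model):

```python
import numpy as np

# volatility forecasts, e.g. from a dynamic model of daily realized volatilities
sigma_hat = np.array([0.012, 0.020, 0.015])

# correlation forecast, e.g. from a model fitted to daily data
R_hat = np.array([[1.0, 0.3, 0.1],
                  [0.3, 1.0, 0.4],
                  [0.1, 0.4, 1.0]])

# combine: Sigma = D R D with D = diag(sigma)
D = np.diag(sigma_hat)
Sigma = D @ R_hat @ D
print(Sigma)
```

As long as the correlation matrix is valid and the volatilities are positive, the combined Sigma is automatically a valid covariance matrix, which is one attraction of modeling the two parts separately.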
Covariance data evaluation for experimental data
International Nuclear Information System (INIS)
Liu Tingjin
1993-01-01
Some methods and codes have been developed and utilized for covariance data evaluation of experimental data, including parameter analysis, physical analysis, spline fitting, etc. These methods and codes can be used in many different cases
Earth Observing System Covariance Realism Updates
Ojeda Romero, Juan A.; Miguel, Fred
2017-01-01
This presentation will be given at the International Earth Science Constellation Mission Operations Working Group meetings June 13-15, 2017 to discuss the Earth Observing System Covariance Realism updates.
Laser Covariance Vibrometry for Unsymmetrical Mode Detection
National Research Council Canada - National Science Library
Kobold, Michael C
2006-01-01
Simulated cross-spectral covariance (CSC) of optical returns from simulated surface vibration indicates that CW phase modulation may be an appropriate phenomenology for adequate classification of vehicles by structural mode...
Error Covariance Estimation of Mesoscale Data Assimilation
National Research Council Canada - National Science Library
Xu, Qin
2005-01-01
The goal of this project is to explore and develop new methods of error covariance estimation that will provide necessary statistical descriptions of prediction and observation errors for mesoscale data assimilation...
International Nuclear Information System (INIS)
Leal, Luiz C.; Arbanas, Goran; Derrien, Herve; Wiarda, Dorothea
2008-01-01
Resonance-parameter covariance matrix (RPCM) evaluations in the resolved resonance region were done for 232Th, 233U, 235U, 238U, and 239Pu using the computer code SAMMY. The retroactive approach of the code SAMMY was used to generate the RPCMs for 233U and 235U. RPCMs for 232Th, 238U and 239Pu were generated together with the resonance parameter evaluations. The RPCMs were then converted into the ENDF format using the FILE32 representation. Alternatively, for computer storage reasons, the FILE32 was converted into the FILE33 cross section covariance matrix (CSCM). Both representations were processed using the computer code PUFF-IV. This paper describes the procedures used to generate the RPCM with SAMMY.
Tensor operators in R-matrix approach
International Nuclear Information System (INIS)
Bytsko, A.G.; Rossijskaya Akademiya Nauk, St. Petersburg
1995-12-01
The definitions and some properties (e.g. the Wigner-Eckart theorem, the fusion procedure) of covariant and contravariant q-tensor operators for quasitriangular quantum Lie algebras are formulated in the R-matrix language. The case of U_q(sl(n)) (in particular, for n=2) is discussed in more detail. (orig.)
Phase-covariant quantum cloning of qudits
International Nuclear Information System (INIS)
Fan Heng; Imai, Hiroshi; Matsumoto, Keiji; Wang, Xiang-Bin
2003-01-01
We study the phase-covariant quantum cloning machine for qudits, i.e., the input states in a d-level quantum system have complex coefficients with arbitrary phase but constant modulus. A cloning unitary transformation is proposed. After optimizing the fidelity between the input state and the single-qudit reduced density operator of the output state, we obtain the optimal fidelity for 1-to-2 phase-covariant quantum cloning of qudits and the corresponding cloning transformation
Noncommutative Gauge Theory with Covariant Star Product
International Nuclear Information System (INIS)
Zet, G.
2010-01-01
We present a noncommutative gauge theory with covariant star product on a space-time with torsion. In order to obtain the covariant star product one imposes some restrictions on the connection of the space-time. Then, a noncommutative gauge theory is developed applying this product to the case of differential forms. Some comments on the advantages of using a space-time with torsion to describe the gravitational field are also given.
Covariant phase difference observables in quantum mechanics
International Nuclear Information System (INIS)
Heinonen, Teiko; Lahti, Pekka; Pellonpää, Juha-Pekka
2003-01-01
Covariant phase difference observables are determined in two different ways, by a direct computation and by a group theoretical method. A characterization of phase difference observables which can be expressed as the difference of two phase observables is given. The classical limits of such phase difference observables are determined and the Pegg-Barnett phase difference distribution is obtained from the phase difference representation. The relation of Ban's theory to the covariant phase theories is exhibited
Covariant perturbations of Schwarzschild black holes
International Nuclear Information System (INIS)
Clarkson, Chris A; Barrett, Richard K
2003-01-01
We present a new covariant and gauge-invariant perturbation formalism for dealing with spacetimes having spherical symmetry (or some preferred spatial direction) in the background, and apply it to the case of gravitational wave propagation in a Schwarzschild black-hole spacetime. The 1 + 3 covariant approach is extended to a '1 + 1 + 2 covariant sheet' formalism by introducing a radial unit vector in addition to the timelike congruence, and decomposing all covariant quantities with respect to this. The background Schwarzschild solution is discussed and a covariant characterization is given. We give the full first-order system of linearized 1 + 1 + 2 covariant equations, and we show how, by introducing (time and spherical) harmonic functions, these may be reduced to a system of first-order ordinary differential equations and algebraic constraints for the 1 + 1 + 2 variables which may be solved straightforwardly. We show how both odd- and even-parity perturbations may be unified by the discovery of a covariant, frame- and gauge-invariant, transverse-traceless tensor describing gravitational waves, which satisfies a covariant wave equation equivalent to the Regge-Wheeler equation for both even- and odd-parity perturbations. We show how the Zerilli equation may be derived from this tensor, and derive a similar transverse-traceless tensor equation equivalent to this equation. The so-called special quasinormal modes with purely imaginary frequency emerge naturally. The significance of the degrees of freedom in the choice of the two frame vectors is discussed, and we demonstrate that, for a certain frame choice, the underlying dynamics is governed purely by the Regge-Wheeler tensor. The two transverse-traceless Weyl tensors which carry the curvature of gravitational waves are discussed, and we give the closed system of four first-order ordinary differential equations describing their propagation. Finally, we consider the extension of this work to the study of
Bodewig, E
1959-01-01
Matrix Calculus, Second Revised and Enlarged Edition focuses on systematic calculation with the building blocks of a matrix and rows and columns, shunning the use of individual elements. The publication first offers information on vectors, matrices, further applications, measures of the magnitude of a matrix, and forms. The text then examines eigenvalues and exact solutions, including the characteristic equation, eigenrows, extremum properties of the eigenvalues, bounds for the eigenvalues, elementary divisors, and bounds for the determinant. The text ponders on approximate solutions, as well
Multilevel covariance regression with correlated random effects in the mean and variance structure.
Quintero, Adrian; Lesaffre, Emmanuel
2017-09-01
Multivariate regression methods generally assume a constant covariance matrix for the observations. When a heteroscedastic model is needed, the parametric and nonparametric covariance regression approaches available in the literature can be restrictive. We propose a multilevel regression model for the mean and covariance structure, including random intercepts in both components and allowing for correlation between them. The implied conditional covariance function can differ across clusters as a result of the random effect in the variance structure. In addition, allowing for correlation between the random intercepts in the mean and covariance makes the model convenient for skewed response distributions. Furthermore, it permits us to analyse directly the relation between the mean response level and the variability in each cluster. Parameter estimation is carried out via Gibbs sampling. We compare the performance of our model to other covariance modelling approaches in a simulation study. Finally, the proposed model is applied to the RN4CAST dataset to identify the variables that impact burnout of nurses in Belgium. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Directory of Open Access Journals (Sweden)
Daniel Bartz
Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock markets we show that our proposed method leads to improved portfolio allocation.
Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W; Müller, Klaus-Robert; Lemm, Steven
2013-01-01
Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation.
Covariance methodology applied to uncertainties in I-126 disintegration rate measurements
International Nuclear Information System (INIS)
Fonseca, K.A.; Koskinas, M.F.; Dias, M.S.
1996-01-01
The covariance methodology applied to uncertainties in 126I disintegration rate measurements is described. Two different coincidence systems were used due to the complex decay scheme of this radionuclide. The parameters involved in the determination of the disintegration rate in each experimental system present correlated components. In this case, conventional statistical methods for determining the uncertainties (the law of propagation) give wrong values for the final uncertainty. Therefore, use of the covariance matrix methodology is necessary. The data from both systems were combined taking into account all possible correlations between the partial uncertainties. (orig.)
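A toy numerical illustration of why the off-diagonal terms matter (values are invented, not the paper's data): for a quantity R = N1 + N2 built from two correlated partial results, propagating only the variances understates the final uncertainty.

```python
import numpy as np

# hypothetical correlated partial uncertainties from two measurement systems
sigma1, sigma2, corr = 0.8, 0.6, 0.7
cov = np.array([[sigma1**2,            corr * sigma1 * sigma2],
                [corr * sigma1 * sigma2, sigma2**2]])
J = np.array([1.0, 1.0])                  # sensitivities dR/dN1, dR/dN2

naive = np.sqrt(sigma1**2 + sigma2**2)    # ignores the correlation
correct = np.sqrt(J @ cov @ J)            # full covariance propagation
print(naive, correct)                     # the correct value is larger here
```

With positive correlation the naive quadrature sum is too small; with negative correlation it would be too large, which is why the full covariance matrix is needed either way.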
Non-stationary pre-envelope covariances of non-classically damped systems
Muscolino, G.
1991-08-01
A new formulation is given to evaluate the stationary and non-stationary response of linear non-classically damped systems subjected to multi-correlated non-separable Gaussian input processes. This formulation is based on a new and more suitable definition of the impulse response function matrix for such systems. It is shown that, when using this definition, the stochastic response of non-classically damped systems involves the evaluation of quantities similar to those of classically damped ones. Furthermore, considerations about non-stationary cross-covariances, spectral moments and pre-envelope cross-covariances are presented for a monocorrelated input process.
Ole E. Barndorff-Nielsen; Neil Shephard
2002-01-01
This paper analyses multivariate high frequency financial data using realised covariation. We provide a new asymptotic distribution theory for standard methods such as regression, correlation analysis and covariance. It will be based on a fixed interval of time (e.g. a day or week), allowing the number of high frequency returns during this period to go to infinity. Our analysis allows us to study how high frequency correlations, regressions and covariances change through time. In particular w...
Are your covariates under control? How normalization can re-introduce covariate effects.
Pain, Oliver; Dudbridge, Frank; Ronald, Angelica
2018-04-30
Many statistical tests rely on the assumption that the residuals of a model are normally distributed. Rank-based inverse normal transformation (INT) of the dependent variable is one of the most popular approaches to satisfy the normality assumption. When covariates are included in the analysis, a common approach is to first adjust for the covariates and then normalize the residuals. This study investigated the effect of regressing covariates against the dependent variable and then applying rank-based INT to the residuals. The correlation between the dependent variable and covariates at each stage of processing was assessed. An alternative approach was tested in which rank-based INT was applied to the dependent variable before regressing covariates. Analyses based on both simulated and real data examples demonstrated that applying rank-based INT to the dependent variable residuals after regressing out covariates re-introduces a linear correlation between the dependent variable and covariates, increasing type-I errors and reducing power. On the other hand, when rank-based INT was applied prior to controlling for covariate effects, residuals were normally distributed and linearly uncorrelated with covariates. This latter approach is therefore recommended in situations where normality of the dependent variable is required.
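A minimal stdlib sketch of the transformation itself (Blom offsets; a hypothetical helper, not the authors' implementation). The study's recommendation is to apply this to the dependent variable first and only then adjust for covariates:

```python
from statistics import NormalDist

def rank_int(values, c=3/8):
    """Rank-based inverse normal transform: z_i = Phi^{-1}((r_i - c)/(n - 2c + 1))."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    ranks = [0.0] * n
    i = 0
    while i < n:                      # assign average ranks to tied values
        j = i
        while j + 1 < n and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    nd = NormalDist()
    return [nd.inv_cdf((r - c) / (n - 2 * c + 1)) for r in ranks]

print(rank_int([3.1, 0.2, 7.4, 1.6, 5.0]))
```

The transform is monotone, so it preserves the ordering of the raw values while forcing the marginal distribution toward a standard normal.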
Conservative Sample Size Determination for Repeated Measures Analysis of Covariance.
Morgan, Timothy M; Case, L Douglas
2013-07-05
In the design of a randomized clinical trial with one pre and multiple post randomized assessments of the outcome variable, one needs to account for the repeated measures in determining the appropriate sample size. Unfortunately, one seldom has a good estimate of the variance of the outcome measure, let alone the correlations among the measurements over time. We show how sample sizes can be calculated by making conservative assumptions regarding the correlations for a variety of covariance structures. The most conservative choice for the correlation depends on the covariance structure and the number of repeated measures. In the absence of good estimates of the correlations, the sample size is often based on a two-sample t-test, making the 'ultra' conservative and unrealistic assumption that there are zero correlations between the baseline and follow-up measures while at the same time assuming there are perfect correlations between the follow-up measures. Compared to the case of taking a single measurement, substantial savings in sample size can be realized by accounting for the repeated measures, even with very conservative assumptions regarding the parameters of the assumed correlation matrix. Assuming compound symmetry, the sample size from the two-sample t-test calculation can be reduced at least 44%, 56%, and 61% for repeated measures analysis of covariance by taking 2, 3, and 4 follow-up measures, respectively. The results offer a rational basis for determining a fairly conservative, yet efficient, sample size for clinical trials with repeated measures and a baseline value.
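The quoted 44%, 56%, and 61% savings can be reproduced under one plausible reading of the setup (an arithmetic sketch, not the authors' code): ANCOVA on the mean of the k follow-up measures with the baseline as covariate, compound symmetry with common correlation rho, sample size proportional to residual variance, and the "most conservative" rho chosen to maximize that variance.

```python
def ancova_variance_factor(k, rho):
    # residual variance, relative to sigma^2 and to the two-sample t-test on a
    # single follow-up: outcome = mean of k follow-ups, baseline as covariate,
    # all pairwise correlations equal to rho (compound symmetry)
    return (1 + (k - 1) * rho - k * rho**2) / k

def conservative_reduction(k):
    # the most conservative rho is the one maximizing the variance factor
    rho_star = (k - 1) / (2 * k)
    return round(100 * (1 - ancova_variance_factor(k, rho_star)))

print([conservative_reduction(k) for k in (2, 3, 4)])   # [44, 56, 61]
```

The maximizing correlation rho* = (k-1)/(2k) plays the role of the "most conservative choice" the abstract mentions: any other rho under compound symmetry would give an even larger saving.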
Rigorous covariance propagation of geoid errors to geodetic MDT estimates
Pail, R.; Albertella, A.; Fecher, T.; Savcenko, R.
2012-04-01
The mean dynamic topography (MDT) is defined as the difference between the mean sea surface (MSS) derived from satellite altimetry, averaged over several years, and the static geoid. Assuming geostrophic conditions, ocean surface velocities, an important component of the global ocean circulation, can be derived from the MDT. Due to the availability of GOCE gravity field models, for the very first time the MDT can now be derived solely from satellite observations (altimetry and gravity) down to spatial length-scales of 100 km and even below. Global gravity field models, parameterized in terms of spherical harmonic coefficients, are complemented by the full variance-covariance matrix (VCM). Therefore, for the geoid component a realistic statistical error estimate is available, while the error description of the altimetric component is still an open issue and is, if addressed at all, treated empirically. In this study we attempt to perform, based on the full gravity VCM, rigorous error propagation to the derived geostrophic surface velocities, thus also considering all correlations. For the definition of the static geoid we use the third release of the time-wise GOCE model, as well as the satellite-only combination model GOCO03S. In detail, we will investigate the velocity errors resulting from the geoid component as a function of harmonic degree, and the impact of using or not using covariances on the MDT errors and their correlations. When deriving an MDT, it is spectrally filtered to a certain maximum degree, usually driven by the signal content of the geoid model, by applying isotropic or non-isotropic filters. Since this filtering acts on the geoid component as well, the consistent integration of this filter process into the covariance propagation will be performed, and its impact will be quantified. The study will be performed for MDT estimates in specific test areas of particular oceanographic interest.
Universal correlations and power-law tails in financial covariance matrices
Akemann, G.; Fischmann, J.; Vivo, P.
2010-07-01
We investigate whether quantities such as the global spectral density or individual eigenvalues of financial covariance matrices can be best modelled by standard random matrix theory or rather by its generalisations displaying power-law tails. In order to generate individual eigenvalue distributions a chopping procedure is devised, which produces a statistical ensemble of asset-price covariances from a single instance of financial data sets. Local results for the smallest eigenvalue and individual spacings are very stable upon reshuffling the time windows and assets. They are in good agreement with the universal Tracy-Widom distribution and Wigner surmise, respectively. This suggests a strong degree of robustness especially in the low-lying sector of the spectra, most relevant for portfolio selections. Conversely, the global spectral density of a single covariance matrix as well as the average over all unfolded nearest-neighbour spacing distributions deviate from standard Gaussian random matrix predictions. The data are in fair agreement with a recently introduced generalised random matrix model, with correlations showing a power-law decay.
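The "standard random matrix predictions" against which such spectra are compared can be illustrated with pure-noise data: for i.i.d. Gaussian returns, the sample covariance eigenvalues fall inside the Marchenko-Pastur bulk (a simulation sketch under these assumptions, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 100, 1000                          # assets x observations, q = N/T
q = N / T

X = rng.standard_normal((N, T))           # i.i.d. unit-variance "returns"
evals = np.linalg.eigvalsh(X @ X.T / T)   # sample covariance spectrum

# Marchenko-Pastur bulk edges for pure noise
lam_minus = (1 - np.sqrt(q))**2
lam_plus = (1 + np.sqrt(q))**2
print(evals.min(), evals.max(), (lam_minus, lam_plus))
```

Real financial spectra deviate from this benchmark (large "market" eigenvalues, heavy tails), which is exactly the kind of discrepancy the generalised power-law models discussed in the paper aim to capture.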
Nuclear data covariances in the Indian context
International Nuclear Information System (INIS)
Ganesan, S.
2014-01-01
The topic of covariances has been recognized as an important part of several ongoing nuclear data science activities, since 2007, in the Nuclear Data Physics Centre of India (NDPCI). A Phase-1 project on nuclear data covariances, in collaboration with the Statistics Department of Manipal University, Karnataka (Prof. K.M. Prasad and Prof. S. Nair), was executed successfully during the 2007-2011 period. In Phase-1, the NDPCI conducted three national theme meetings, sponsored by the DAE-BRNS in 2008, 2010 and 2013, on nuclear data covariances. The emphasis in Phase-1 was on a thorough basic understanding of the concept of covariances, including assigning uncertainties to experimental data in terms of partial errors and micro correlations, through a study and detailed discussion of the open literature. Towards the end of Phase-1, measurements and a first-time covariance analysis of cross sections for the 58Ni(n,p)58Co reaction, measured at the Mumbai Pelletron accelerator using the 7Li(p,n) reaction as a neutron source in the MeV energy region, were performed under a PhD programme on nuclear data covariances in which two students, Shri B.S. Shivashankar and Ms. Shanti Sheela, are enrolled. India is also successfully evolving a team of young researchers to code nuclear data uncertainties, with a perspective on covariances, in the IAEA-EXFOR format. A Phase-2 DAE-BRNS-NDPCI project proposal at Manipal has been submitted and is undergoing peer review at this time. In Phase-2, modern nuclear data evaluation techniques that include covariances will be further studied as a research and development effort, for the first time. These efforts include the use of techniques such as the Kalman filter. Presently, a 48-hour lecture series on the treatment of errors and their propagation is being formulated under the auspices of the Homi Bhabha National Institute. The talk describes the progress achieved thus far in the learning curve of the above-mentioned and exciting
Schroedinger covariance states in anisotropic waveguides
International Nuclear Information System (INIS)
Angelow, A.; Trifonov, D.
1995-03-01
In this paper squeezed and covariance states based on the Schroedinger inequality, and their connection with other nonclassical states, are considered for the particular case of an anisotropic waveguide in LiNiO3. Here, the problem of photon creation and generation of squeezed and Schroedinger covariance states in optical waveguides is solved in two steps: 1. Quantization of the electromagnetic field is provided in the presence of a dielectric waveguide using a normal-mode expansion. The photon creation and annihilation operators are introduced, expanding the solution A(r, t) in a series in terms of the Sturm-Liouville mode functions. 2. In terms of these operators the Hamiltonian of the field in a nonlinear waveguide is derived. For this Hamiltonian we construct the covariance states as stable states (with nonzero covariance) which minimize the Schroedinger uncertainty relation. The evolutions of the three second moments of q̂_j and p̂_j are calculated. For this Hamiltonian all three moments are expressed in terms of one real parameter s only. It is found how the covariance, via this parameter s, depends on the waveguide profile n(x,y), on the mode distributions u_j(x,y), and on the waveguide phase mismatch Δβ. (author). 37 refs
Form of the manifestly covariant Lagrangian
Johns, Oliver Davis
1985-10-01
The preferred form for the manifestly covariant Lagrangian function of a single, charged particle in a given electromagnetic field is the subject of some disagreement in the textbooks. Some authors use a "homogeneous" Lagrangian and others use a "modified" form in which the covariant Hamiltonian function is made to be nonzero. We argue in favor of the "homogeneous" form. We show that the covariant Lagrangian theories can be understood only if one is careful to distinguish quantities evaluated on the varied (in the sense of the calculus of variations) world lines from quantities evaluated on the unvaried world lines. By making this distinction, we are able to derive the Hamilton-Jacobi and Klein-Gordon equations from the "homogeneous" Lagrangian, even though the covariant Hamiltonian function is identically zero on all world lines. The derivation of the Klein-Gordon equation in particular gives Lagrangian theoretical support to the derivations found in standard quantum texts, and is also shown to be consistent with the Feynman path-integral method. We conclude that the "homogeneous" Lagrangian is a completely adequate basis for covariant Lagrangian theory both in classical and quantum mechanics. The article also explores the analogy with the Fermat theorem of optics, and illustrates a simple invariant notation for the Lagrangian and other four-vector equations.
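For reference, a standard textbook form of the "homogeneous" Lagrangian (stated here as an illustration, not quoted from the article) for a particle of mass m and charge q in a field A_μ, with λ an arbitrary world-line parameter, is

```latex
L\left(x,\dot{x}\right)
  = -\,m c \sqrt{g_{\mu\nu}\,\dot{x}^{\mu}\dot{x}^{\nu}}
    \;-\; \frac{q}{c}\, A_{\mu}\,\dot{x}^{\mu},
  \qquad \dot{x}^{\mu} \equiv \frac{dx^{\mu}}{d\lambda}.
```

Because this L is homogeneous of degree one in the velocities, Euler's theorem gives p_μ ẋ^μ = L, so the covariant Hamiltonian H = p_μ ẋ^μ − L vanishes identically, consistent with the abstract's statement.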
Covariance structure in the skull of Catarrhini: a case of pattern stasis and magnitude evolution.
de Oliveira, Felipe Bandoni; Porto, Arthur; Marroig, Gabriel
2009-04-01
The study of the genetic variance/covariance matrix (G-matrix) is a recent and fruitful approach in evolutionary biology, providing a window for investigating the evolution of complex characters. Although G-matrix studies were originally conducted for microevolutionary timescales, they could be extrapolated to macroevolution as long as the G-matrix remains relatively constant, or proportional, along the period of interest. A promising approach to investigating the constancy of G-matrices is to compare their phenotypic counterparts (P-matrices) in a large group of related species; if significant similarity is found among several taxa, it is very likely that the underlying G-matrices are also equivalent. Here we study the similarity of covariance and correlation structure in a broad sample of Old World monkeys and apes (Catarrhini). We made phylogenetically structured comparisons of correlation and covariance matrices derived from 39 skull traits, ranging from between species to the superfamily level. We also compared the overall magnitude of integration between skull traits (r2) for all Catarrhini genera. Our results show that P-matrices were not strictly constant among catarrhines, but the amount of divergence observed among taxa was generally low. There was significant and positive correlation between the amount of divergence in correlation and covariance patterns among the 30 genera and their phylogenetic distances derived from a recently proposed phylogenetic hypothesis. Our data demonstrate that the P-matrices remained relatively similar along the evolutionary history of catarrhines, and comparisons with the G-matrix available for a New World monkey genus (Saguinus) suggests that the same holds for all anthropoids. The magnitude of integration, in contrast, varied considerably among genera, indicating that evolution of the magnitude, rather than the pattern of inter-trait correlations, might have played an important role in the diversification of the
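Covariance-matrix comparisons of this kind are often done with the random-skewers method: apply the same random selection gradients β to both matrices and correlate the response vectors z = Pβ. A minimal sketch (illustrative only; the matrices are invented, and this is not the authors' code):

```python
import numpy as np

def random_skewers(P1, P2, n_vec=1000, seed=0):
    """Mean vector correlation of responses z = P @ beta over random unit skewers."""
    rng = np.random.default_rng(seed)
    betas = rng.standard_normal((n_vec, P1.shape[0]))
    betas /= np.linalg.norm(betas, axis=1, keepdims=True)
    corrs = []
    for b in betas:
        z1, z2 = P1 @ b, P2 @ b
        corrs.append(z1 @ z2 / (np.linalg.norm(z1) * np.linalg.norm(z2)))
    return float(np.mean(corrs))

P = np.array([[2.0, 0.8],
              [0.8, 1.0]])
print(random_skewers(P, P))            # identical matrices: similarity 1
print(random_skewers(P, np.eye(2)))    # vs. an uncorrelated matrix: < 1
```

Identical matrices deflect every skewer identically, so the statistic is 1; structurally different matrices deflect the same skewer in different directions, lowering the average vector correlation.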
One-loop matching and running with covariant derivative expansion
Henning, Brian; Lu, Xiaochuan; Murayama, Hitoshi
2018-01-01
We develop tools for performing effective field theory (EFT) calculations in a manifestly gauge-covariant fashion. We clarify how functional methods account for one-loop diagrams resulting from the exchange of both heavy and light fields, as some confusion has recently arisen in the literature. To efficiently evaluate functional traces containing these "mixed" one-loop terms, we develop a new covariant derivative expansion (CDE) technique that is capable of evaluating a much wider class of traces than previous methods. The technique is detailed in an appendix, so that it can be read independently from the rest of this work. We review the well-known matching procedure to one-loop order with functional methods. What we add to this story is showing how to isolate one-loop terms coming from diagrams involving only heavy propagators from diagrams with mixed heavy and light propagators. This is done using a non-local effective action, which physically connects to the notion of "integrating out" heavy fields. Lastly, we show how to use a CDE to do running analyses in EFTs, i.e. to obtain the anomalous dimension matrix. We demonstrate the methodologies by several explicit example calculations.
A FORMALISM FOR COVARIANT POLARIZED RADIATIVE TRANSPORT BY RAY TRACING
International Nuclear Information System (INIS)
Gammie, Charles F.; Leung, Po Kin
2012-01-01
We write down a covariant formalism for polarized radiative transfer appropriate for ray tracing through a turbulent plasma. The polarized radiation field is represented by the polarization tensor (coherency matrix) N^{αβ} ≡ ⟨a_k^α a_k^{*β}⟩, where a_k is a Fourier coefficient for the vector potential. Using Maxwell's equations, the Liouville-Vlasov equation, and the WKB approximation, we show that the transport equation in vacuo is k^μ ∇_μ N^{αβ} = 0. We show that this is equivalent to Broderick and Blandford's formalism based on invariant Stokes parameters and a rotation coefficient, and suggest a modification that may reduce truncation error in some situations. Finally, we write down several alternative approaches to integrating the transfer equation.
Covariant differential calculus on the quantum exterior vector space
International Nuclear Information System (INIS)
Parashar, P.; Soni, S.K.
1992-01-01
We formulate a differential calculus on the quantum exterior vector space spanned by the generators of a non-anticommutative algebra satisfying r^{ij} = θ^i θ^j + B^{ij}_{kl} θ^k θ^l = 0, i, j = 1, 2, ..., n, and (θ^1)^2 = (θ^2)^2 = ... = (θ^n)^2 = 0, where B^{ij}_{kl} is the most general matrix defined in terms of complex deformation parameters. Following considerations analogous to those of Wess and Zumino, we are able to exhibit covariance of our calculus under an (n(n-1)/2)+1 parameter deformation of GL(n) and explicitly check that the non-anticommutative differential calculus satisfies the general constraints given by them, such as the 'linear' conditions dr^{ij} ≅ 0 and the 'quadratic' condition r^{ij} x^n ≅ 0, where x^n = dθ^n are the differentials of the variables. (orig.)
Homonuclear long-range correlation spectra from HMBC experiments by covariance processing.
Schoefberger, Wolfgang; Smrecki, Vilko; Vikić-Topić, Drazen; Müller, Norbert
2007-07-01
We present a new application of covariance nuclear magnetic resonance processing based on 1H-13C HMBC experiments which provides an effective way of establishing indirect 1H-1H and 13C-13C nuclear spin connectivity at natural isotope abundance. The method, which identifies correlated spin networks in terms of covariance between one-dimensional traces from a single decoupled HMBC experiment, derives 13C-13C as well as 1H-1H spin connectivity maps from the two-dimensional frequency domain heteronuclear long-range correlation data matrix. The potential and limitations of this novel covariance NMR application are demonstrated on two compounds: eugenyl-beta-D-glucopyranoside and an emodin derivative. Copyright (c) 2007 John Wiley & Sons, Ltd.
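The covariance processing underlying this kind of method can be sketched generically: for a real 2D data matrix F whose rows are one-dimensional traces, the symmetric map C = (FᵀF)^(1/2) correlates columns that share intensity patterns across the traces. A minimal NumPy sketch (the function name and toy matrix are illustrative, not taken from the paper):

```python
import numpy as np

def covariance_map(spectrum_2d):
    """Generic covariance NMR processing: given a real 2D frequency-domain
    data matrix F (rows = traces), return C = (F^T F)^(1/2), which
    correlates columns sharing intensity patterns across the traces."""
    f = np.asarray(spectrum_2d, dtype=float)
    cov = f.T @ f                      # column-column covariance (unnormalized)
    # matrix square root via eigendecomposition of the symmetric matrix
    w, v = np.linalg.eigh(cov)
    w = np.clip(w, 0.0, None)          # guard against tiny negative eigenvalues
    return (v * np.sqrt(w)) @ v.T

# toy example: columns 0 and 2 carry the same pattern and become correlated
F = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [2.0, 0.0, 2.0]])
C = covariance_map(F)
```

In the toy matrix, the off-diagonal entry C[0, 2] is large while C[0, 1] vanishes, reflecting which traces co-vary.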
International Nuclear Information System (INIS)
Craps, Ben; Evnin, Oleg; Nguyen, Kévin
2017-01-01
Matrix quantum mechanics offers an attractive environment for discussing gravitational holography, in which both sides of the holographic duality are well-defined. Similarly to higher-dimensional implementations of holography, collapsing shell solutions in the gravitational bulk correspond in this setting to thermalization processes in the dual quantum mechanical theory. We construct an explicit, fully nonlinear supergravity solution describing a generic collapsing dilaton shell, specify the holographic renormalization prescriptions necessary for computing the relevant boundary observables, and apply them to evaluating thermalizing two-point correlation functions in the dual matrix theory.
Group covariant protocols for quantum string commitment
International Nuclear Information System (INIS)
Tsurumaru, Toyohiro
2006-01-01
We study the security of quantum string commitment (QSC) protocols with a group covariant encoding scheme. First we consider a class of QSC protocols which is general enough to incorporate all the QSC protocols given in the preceding literature. Then, among those protocols, we consider group covariant protocols and show that the exact upper bound on the binding condition can be calculated. Next, using this result, we prove that for every irreducible representation of a finite group there always exists a corresponding nontrivial QSC protocol which reaches a level of security impossible to achieve classically
The covariant entropy bound in gravitational collapse
International Nuclear Information System (INIS)
Gao, Sijie; Lemos, Jose P. S.
2004-01-01
We study the covariant entropy bound in the context of gravitational collapse. First, we discuss critically the heuristic arguments advanced by Bousso. Then we solve the problem through an exact model: a Tolman-Bondi dust shell collapsing into a Schwarzschild black hole. After the collapse, a new black hole with a larger mass is formed. The horizon, L, of the old black hole then terminates at the singularity. We show that the entropy crossing L does not exceed a quarter of the area of the old horizon. Therefore, the covariant entropy bound is satisfied in this process. (author)
Modular invariance and covariant loop calculus
International Nuclear Information System (INIS)
Petersen, J.L.; Roland, K.O.; Sidenius, J.R.
1988-01-01
The covariant loop calculus provides an efficient technique for computing explicit expressions for the density on moduli space corresponding to arbitrary (bosonic string) loop diagrams. Since modular invariance is not manifest, however, we carry out a detailed comparison with known explicit two- and three-loop results derived using analytic geometry (one loop is known to be okay). We establish identity to 'high' order in some moduli and exactly in others. Agreement is found as a result of various nontrivial cancellations, in part related to number theory. We feel our results provide very strong support for the correctness of the covariant loop calculus approach. (orig.)
Remarks on Bousso's covariant entropy bound
Mayo, A E
2002-01-01
Bousso's covariant entropy bound is put to the test in the context of a non-singular cosmological solution of general relativity found by Bekenstein. Although the model complies with every assumption made in Bousso's original conjecture, the entropy bound is violated due to the occurrence of negative energy density associated with the interaction of some of the matter components in the model. We demonstrate how this property allows the test model to 'elude' a proof of Bousso's conjecture which was given recently by Flanagan, Marolf and Wald. This corroborates the view that the covariant entropy bound should be applied only to stable systems for which every matter component carries positive energy density.
Activities on covariance estimation in Japanese Nuclear Data Committee
Energy Technology Data Exchange (ETDEWEB)
Shibata, Keiichi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1997-03-01
Described are activities on covariance estimation in the Japanese Nuclear Data Committee. Covariances are obtained from measurements by using the least-squares methods. A simultaneous evaluation was performed to deduce covariances of fission cross sections of U and Pu isotopes. A code system, KALMAN, is used to estimate covariances of nuclear model calculations from uncertainties in model parameters. (author)
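The parameter-uncertainty propagation that a code system like KALMAN performs can be illustrated with first-order ("sandwich") error propagation: given a parameter covariance V_p and a sensitivity matrix S of calculated quantities with respect to the parameters, the covariance of the calculated quantities is S V_p Sᵀ. A minimal sketch with hypothetical numbers (none of them from the source):

```python
import numpy as np

# Hypothetical illustration: two model parameters with covariance V_p, and
# a sensitivity matrix S = d(sigma)/d(p) of three calculated cross sections
# with respect to those parameters.
V_p = np.array([[0.04, 0.01],
                [0.01, 0.09]])
S = np.array([[1.0, 0.5],
              [0.8, 0.2],
              [0.3, 1.1]])

# first-order (sandwich) propagation: V_sigma = S V_p S^T
V_sigma = S @ V_p @ S.T
```

The resulting V_sigma carries correlated uncertainties for the calculated cross sections, exactly the kind of covariance information the evaluation requires.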
Covariant canonical quantization of fields and Bohmian mechanics
International Nuclear Information System (INIS)
Nikolic, H.
2005-01-01
We propose a manifestly covariant canonical method of field quantization based on the classical De Donder-Weyl covariant canonical formulation of field theory. Owing to covariance, the space and time arguments of fields are treated on an equal footing. To achieve both covariance and consistency with standard non-covariant canonical quantization of fields in Minkowski spacetime, it is necessary to adopt a covariant Bohmian formulation of quantum field theory. A preferred foliation of spacetime emerges dynamically owing to a purely quantum effect. The application to a simple time-reparametrization invariant system and quantum gravity is discussed and compared with the conventional non-covariant Wheeler-DeWitt approach. (orig.)
Perils of parsimony: properties of reduced-rank estimates of genetic covariance matrices.
Meyer, Karin; Kirkpatrick, Mark
2008-10-01
Eigenvalues and eigenvectors of covariance matrices are important statistics for multivariate problems in many applications, including quantitative genetics. Estimates of these quantities are subject to different types of bias. This article reviews and extends the existing theory on these biases, considering a balanced one-way classification and restricted maximum-likelihood estimation. Biases are due to the spread of sample roots and arise from ignoring selected principal components when imposing constraints on the parameter space, to ensure positive semidefinite estimates or to estimate covariance matrices of chosen, reduced rank. In addition, it is shown that reduced-rank estimators that consider only the leading eigenvalues and eigenvectors of the "between-group" covariance matrix may be biased due to selecting the wrong subset of principal components. In a genetic context, with groups representing families, this bias is inversely proportional to the degree of genetic relationship among family members, but is independent of sample size. Theoretical results are supplemented by a simulation study, demonstrating close agreement between predicted and observed bias for large samples. It is emphasized that the rank of the genetic covariance matrix should be chosen sufficiently large to accommodate all important genetic principal components, even though, paradoxically, this may require including a number of components with negligible eigenvalues. A strategy for rank selection in practical analyses is outlined.
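The "spread of sample roots" driving these biases is easy to demonstrate in a minimal Monte Carlo sketch (unrelated to the authors' REML setting; the sample sizes are arbitrary choices): even when every true eigenvalue equals one, the largest sample eigenvalue is biased upward and the smallest downward.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, reps = 5, 20, 400        # dimension, sample size, replicates

top, bottom = [], []
for _ in range(reps):
    x = rng.standard_normal((n, p))      # identity covariance: all true roots = 1
    w = np.linalg.eigvalsh(np.cov(x, rowvar=False))
    top.append(w[-1])
    bottom.append(w[0])

# the largest sample root is biased upward and the smallest downward,
# even though every true eigenvalue equals 1
mean_top, mean_bottom = float(np.mean(top)), float(np.mean(bottom))
```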
Generation of covariance data among values from a single set of experiments
International Nuclear Information System (INIS)
Smith, D.L.
1992-01-01
Modern nuclear data evaluation methods demand detailed uncertainty information for all input results to be considered. It can be shown from basic statistical principles that provision of a covariance matrix for a set of data provides the necessary information for its proper consideration in the context of other included experimental data and/or a priori representations of the physical parameters in question. This paper examines how an experimenter should go about preparing the covariance matrix for any single experimental data set he intends to report. The process involves detailed examination of the experimental procedures, identification of all error sources (both random and systematic), and consideration of any internal discrepancies. Some specific examples are given to illustrate the methods and principles involved
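A standard building block in such constructions is worth making explicit: with uncorrelated (random) errors e_i and a fully correlated systematic component s_i, the covariance matrix is V_ij = δ_ij e_i² + s_i s_j. A minimal sketch (the numbers are hypothetical, not from the paper):

```python
import numpy as np

def build_covariance(random_err, systematic_err):
    """V_ij = delta_ij * e_i^2 + s_i * s_j: uncorrelated (random) errors
    fill the diagonal, while a fully correlated systematic component
    contributes the rank-one outer product."""
    e = np.asarray(random_err, dtype=float)
    s = np.asarray(systematic_err, dtype=float)
    return np.diag(e**2) + np.outer(s, s)

# illustration: 1% random error and a 2% common normalization error
# on three measured values of 100
V = build_covariance([1.0, 1.0, 1.0], [2.0, 2.0, 2.0])
```

Here each diagonal element is 1² + 2² = 5 and each off-diagonal element is 2·2 = 4, i.e. a correlation of 0.8 induced by the shared systematic.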
Zero curvature conditions and conformal covariance
International Nuclear Information System (INIS)
Akemann, G.; Grimm, R.
1992-05-01
Two-dimensional zero curvature conditions were investigated in detail, with special emphasis on conformal properties, and the appearance of covariant higher order differential operators constructed in terms of a projective connection was elucidated. The analysis is based on the Kostant decomposition of simple Lie algebras in terms of representations with respect to their 'principal' SL(2) subalgebra. (author) 27 refs
On superfield covariant quantization in general coordinates
International Nuclear Information System (INIS)
Gitman, D.M.; Moshin, P. Yu.; Tomazelli, J.L.
2005-01-01
We propose a natural extension of the BRST-antiBRST superfield covariant scheme in general coordinates. Thus, the coordinate dependence of the basic tensor fields and scalar density of the formalism is extended from the base supermanifold to the complete set of superfield variables. (orig.)
On superfield covariant quantization in general coordinates
Energy Technology Data Exchange (ETDEWEB)
Gitman, D.M. [Universidade de Sao Paulo, Instituto de Fisica, Sao Paulo, S.P (Brazil); Moshin, P. Yu. [Universidade de Sao Paulo, Instituto de Fisica, Sao Paulo, S.P (Brazil); Tomsk State Pedagogical University, Tomsk (Russian Federation); Tomazelli, J.L. [UNESP, Departamento de Fisica e Quimica, Campus de Guaratingueta (Brazil)
2005-12-01
We propose a natural extension of the BRST-antiBRST superfield covariant scheme in general coordinates. Thus, the coordinate dependence of the basic tensor fields and scalar density of the formalism is extended from the base supermanifold to the complete set of superfield variables. (orig.)
Covariant field theory of closed superstrings
International Nuclear Information System (INIS)
Siopsis, G.
1989-01-01
The authors construct covariant field theories of both type-II and heterotic strings. Toroidal compactification is also considered. The interaction vertices are based on Witten's vertex representing three strings interacting at the mid-point. For closed strings, the authors thus obtain a bilocal interaction
Conformally covariant composite operators in quantum chromodynamics
International Nuclear Information System (INIS)
Craigie, N.S.; Dobrev, V.K.; Todorov, I.T.
1983-03-01
Conformal covariance is shown to determine renormalization properties of composite operators in QCD and in the φ³ model in six dimensions at the one-loop level. Its relevance to higher order (renormalization group improved) perturbative calculations in the short distance limit is also discussed. Light cone operator product expansions and spectral representations for wave functions in QCD are derived. (author)
Soft covariant gauges on the lattice
Energy Technology Data Exchange (ETDEWEB)
Henty, D.S.; Oliveira, O.; Parrinello, C.; Ryan, S. [Department of Physics and Astronomy, University of Edinburgh, Edinburgh EH9 3JZ, Scotland (UKQCD Collaboration)
1996-12-01
We present an exploratory study of a one-parameter family of covariant, nonperturbative lattice gauge-fixing conditions that can be implemented through a simple Monte Carlo algorithm. We demonstrate that at the numerical level the procedure is feasible, and as a first application we examine the gauge dependence of the gluon propagator. © 1996 The American Physical Society.
Covariant differential calculus on the quantum hyperplane
International Nuclear Information System (INIS)
Wess, J.
1991-01-01
We develop a differential calculus on the quantum hyperplane covariant with respect to the action of the quantum group GL_q(n). This is a concrete example of noncommutative differential geometry. We describe the general constraints for a noncommutative differential calculus and verify that the example given here satisfies all these constraints. We also discuss briefly the integration over the quantum plane. (orig.)
Covariant single-hole optical potential
International Nuclear Information System (INIS)
Kam, J. de
1982-01-01
In this investigation a covariant optical potential model is constructed for scattering processes of mesons from nuclei in which the meson interacts repeatedly with one of the target nucleons. The nuclear binding interactions in the intermediate scattering state are consistently taken into account. In particular for pions and K - projectiles this is important in view of the strong energy dependence of the elementary projectile-nucleon amplitude. Furthermore, this optical potential satisfies unitarity and relativistic covariance. The starting point in our discussion is the three-body model for the optical potential. To obtain a practical covariant theory I formulate the three-body model as a relativistic quasi two-body problem. Expressions for the transition interactions and propagators in the quasi two-body equations are found by imposing the correct s-channel unitarity relations and by using dispersion integrals. This is done in such a way that the correct non-relativistic limit is obtained, avoiding clustering problems. Corrections to the quasi two-body treatment from the Pauli principle and the required ground-state exclusion are taken into account. The covariant equations that we arrive at are amenable to practical calculations. (orig.)
Nonlinear realization of general covariance group
International Nuclear Information System (INIS)
Hamamoto, Shinji
1979-01-01
The structure of the theory resulting from the nonlinear realization of the general covariance group is analysed. We discuss the general form of the free Lagrangian for Goldstone fields, and propose as a special choice one reasonable form which is shown to describe a gravitational theory with a massless tensor graviton and a massive vector tordion. (author)
Covariant quantum mechanics on a null plane
International Nuclear Information System (INIS)
Leutwyler, H.; Stern, J.
1977-03-01
Lorentz invariance implies that the null plane wave functions factorize into a kinematical part describing the motion of the system as a whole and an inner wave function that involves the specific dynamical properties of the system - in complete correspondence with the non-relativistic situation. Covariance is equivalent to an angular condition which admits non-trivial solutions
Approximate methods for derivation of covariance data
International Nuclear Information System (INIS)
Tagesen, S.
1992-01-01
Several approaches for the derivation of covariance information for evaluated nuclear data files (EFF2 and ENDF/B-VI) have been developed and used at IRK and ORNL respectively. Considerations, governing the choice of a distinct method depending on the quantity and quality of available data are presented, advantages/disadvantages are discussed and examples of results are given
Optimal covariate designs theory and applications
Das, Premadhis; Mandal, Nripes Kumar; Sinha, Bikas Kumar
2015-01-01
This book primarily addresses the optimality aspects of covariate designs. A covariate model is a combination of ANOVA and regression models. Optimal estimation of the parameters of the model using a suitable choice of designs is of great importance; as such choices allow experimenters to extract maximum information for the unknown model parameters. The main emphasis of this monograph is to start with an assumed covariate model in combination with some standard ANOVA set-ups such as CRD, RBD, BIBD, GDD, BTIBD, BPEBD, cross-over, multi-factor, split-plot and strip-plot designs, treatment control designs, etc. and discuss the nature and availability of optimal covariate designs. In some situations, optimal estimations of both ANOVA and the regression parameters are provided. Global optimality and D-optimality criteria are mainly used in selecting the design. The standard optimality results of both discrete and continuous set-ups have been adapted, and several novel combinatorial techniques have been applied for...
Asymptotics for the minimum covariance determinant estimator
Butler, R.W.; Davies, P.L.; Jhun, M.
1993-01-01
Consistency is shown for the minimum covariance determinant (MCD) estimators of multivariate location and scale and asymptotic normality is shown for the former. The proofs are made possible by showing a separating ellipsoid property for the MCD subset of observations. An analogous property is shown
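The MCD estimator itself can be sketched with a naive random-subset search (real implementations use the FAST-MCD algorithm; the subset counts and data below are illustrative): among h-point subsets, keep the one whose sample covariance has the smallest determinant.

```python
import numpy as np

def mcd_estimate(x, h, trials=200, seed=0):
    """Naive minimum covariance determinant search: among random h-subsets
    (each refined by one concentration step), keep the subset whose sample
    covariance has the smallest determinant."""
    rng = np.random.default_rng(seed)
    n, p = x.shape
    best_det, best = np.inf, None
    for _ in range(trials):
        idx = rng.choice(n, size=h, replace=False)
        mu = x[idx].mean(axis=0)
        cov = np.cov(x[idx], rowvar=False)
        # concentration step: keep the h points closest in Mahalanobis distance
        d = np.einsum('ij,jk,ik->i', x - mu, np.linalg.inv(cov), x - mu)
        idx = np.argsort(d)[:h]
        cov = np.cov(x[idx], rowvar=False)
        det = np.linalg.det(cov)
        if det < best_det:
            best_det, best = det, (x[idx].mean(axis=0), cov)
    return best

# toy data: 80 clean points near the origin plus 10 gross outliers
rng = np.random.default_rng(1)
clean = rng.normal(0.0, 1.0, size=(80, 2))
outliers = rng.normal(8.0, 0.5, size=(10, 2))
loc, scatter = mcd_estimate(np.vstack([clean, outliers]), h=60)
```

Despite the outliers at distance ~8, the MCD location estimate stays near the origin, illustrating the robustness property whose asymptotics the paper studies.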
EQUIVALENT MODELS IN COVARIANCE STRUCTURE-ANALYSIS
LUIJBEN, TCW
1991-01-01
Defining equivalent models as those that reproduce the same set of covariance matrices, necessary and sufficient conditions are stated for the local equivalence of two expanded identified models M1 and M2 when fitting the more restricted model M0. Assuming several regularity conditions, the rank
A full scale approximation of covariance functions for large spatial data sets
Sang, Huiyan
2011-10-10
Gaussian process models have been widely used in spatial statistics but face tremendous computational challenges for very large data sets. The model fitting and spatial prediction of such models typically require O(n³) operations for a data set of size n. Various approximations of the covariance functions have been introduced to reduce the computational cost. However, most existing approximations cannot simultaneously capture both the large- and the small-scale spatial dependence. A new approximation scheme is developed to provide a high quality approximation to the covariance function at both the large and the small spatial scales. The new approximation is the summation of two parts: a reduced rank covariance and a compactly supported covariance obtained by tapering the covariance of the residual of the reduced rank approximation. Whereas the former part mainly captures the large-scale spatial variation, the latter part captures the small-scale, local variation that is unexplained by the former part. By combining the reduced rank representation and sparse matrix techniques, our approach allows for efficient computation for maximum likelihood estimation, spatial prediction and Bayesian inference. We illustrate the new approach with simulated and real data sets. © 2011 Royal Statistical Society.
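The two-part approximation can be sketched in one dimension (the covariance model, knot placement, and taper choice below are illustrative assumptions, not the authors' specification): a reduced-rank part built from knots, plus a compactly supported taper applied to the residual covariance.

```python
import numpy as np

def exp_cov(d, range_):
    """Exponential covariance function."""
    return np.exp(-d / range_)

def askey_taper(d, gamma):
    """Compactly supported Askey taper (valid covariance in 1-D)."""
    t = np.clip(1.0 - d / gamma, 0.0, None)
    return t**2

s = np.linspace(0.0, 10.0, 60)               # data locations (toy 1-D example)
knots = np.linspace(0.0, 10.0, 8)            # knots for the reduced-rank part

D = np.abs(s[:, None] - s[None, :])
C = exp_cov(D, range_=2.0)                   # "true" covariance matrix

# reduced-rank (predictive-process / Nystrom-style) part
C_sk = exp_cov(np.abs(s[:, None] - knots[None, :]), 2.0)
C_kk = exp_cov(np.abs(knots[:, None] - knots[None, :]), 2.0)
C_lowrank = C_sk @ np.linalg.solve(C_kk, C_sk.T)

# tapered residual part captures the small-scale, local variation
C_resid = (C - C_lowrank) * askey_taper(D, gamma=1.5)

C_fullscale = C_lowrank + C_resid
```

Because the taper equals one at zero distance, the full-scale approximation reproduces the variances exactly and reduces the overall approximation error relative to the reduced-rank part alone.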
Sang, Huiyan
2011-12-01
This paper investigates the cross-correlations across multiple climate model errors. We build a Bayesian hierarchical model that accounts for the spatial dependence of individual models as well as cross-covariances across different climate models. Our method allows for a nonseparable and nonstationary cross-covariance structure. We also present a covariance approximation approach to facilitate the computation in the modeling and analysis of very large multivariate spatial data sets. The covariance approximation consists of two parts: a reduced-rank part to capture the large-scale spatial dependence, and a sparse covariance matrix to correct the small-scale dependence error induced by the reduced rank approximation. We pay special attention to the case that the second part of the approximation has a block-diagonal structure. Simulation results of model fitting and prediction show substantial improvement of the proposed approximation over the predictive process approximation and the independent blocks analysis. We then apply our computational approach to the joint statistical modeling of multiple climate model errors. © 2012 Institute of Mathematical Statistics.
On the Methodology to Calculate the Covariance of Estimated Resonance Parameters
International Nuclear Information System (INIS)
Becker, B.; Kopecky, S.; Schillebeeckx, P.
2015-01-01
Principles to determine resonance parameters and their covariance from experimental data are discussed. Different methods to propagate the covariance of experimental parameters are compared. A full Bayesian statistical analysis reveals that the level to which the initial uncertainty of the experimental parameters propagates strongly depends on the experimental conditions. For high precision data the initial uncertainties of experimental parameters, like a normalization factor, have almost no impact on the covariance of the parameters in case of thick sample measurements and conventional uncertainty propagation or full Bayesian analysis. The covariances derived from a full Bayesian analysis and least-squares fit are derived under the condition that the model describing the experimental observables is perfect. When the quality of the model cannot be verified, a more conservative method based on a renormalization of the covariance matrix is recommended to propagate fully the uncertainty of experimental systematic effects. Finally, neutron resonance transmission analysis is proposed as an accurate method to validate evaluated data libraries in the resolved resonance region
Anomaly detection in OECD Benchmark data using co-variance methods
International Nuclear Information System (INIS)
Srinivasan, G.S.; Krinizs, K.; Por, G.
1993-02-01
OECD Benchmark data distributed for the SMORN VI Specialists Meeting in Reactor Noise were investigated for anomaly detection in artificially generated reactor noise benchmark analysis. It was observed that statistical features extracted from the covariance matrix of frequency components are very sensitive in terms of the anomaly detection level. It is possible to create well defined alarm levels. (R.P.) 5 refs.; 23 figs.; 1 tab
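A generic covariance-based anomaly detector of this kind (all details below are illustrative, not the SMORN benchmark procedure) learns the mean and covariance of spectral feature vectors under normal conditions, then raises an alarm when the Mahalanobis distance of a new feature vector exceeds a level calibrated on the reference data:

```python
import numpy as np

rng = np.random.default_rng(0)

# reference (normal-condition) feature vectors, e.g. band powers extracted
# from noise spectra; purely simulated here
reference = rng.normal(0.0, 1.0, size=(500, 4))
mu = reference.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(reference, rowvar=False))

def mahalanobis2(x):
    """Squared Mahalanobis distance of a feature vector from the reference."""
    d = np.asarray(x) - mu
    return float(d @ cov_inv @ d)

# well-defined alarm level calibrated on the reference scores themselves
ref_scores = np.array([mahalanobis2(r) for r in reference])
alarm = np.percentile(ref_scores, 99)

anomaly_score = mahalanobis2([5.0, 5.0, 5.0, 5.0])   # clearly off-distribution
```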
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole Eiler; Shephard, N.
2004-01-01
This paper analyses multivariate high frequency financial data using realized covariation. We provide a new asymptotic distribution theory for standard methods such as regression, correlation analysis, and covariance. It will be based on a fixed interval of time (e.g., a day or week), allowing the number of high frequency returns during this period to go to infinity. Our analysis allows us to study how high frequency correlations, regressions, and covariances change through time. In particular we provide confidence intervals for each of these quantities.
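The basic realized covariation estimator is simply the sum of outer products of the high-frequency return vectors within the fixed interval. A minimal simulated sketch (the correlation level and sampling frequency are hypothetical choices):

```python
import numpy as np

def realized_covariance(returns):
    """Realized covariation over one fixed interval (e.g. a trading day):
    the sum of outer products of the high-frequency return vectors,
    sum_i r_i r_i^T, computed here as R^T R."""
    r = np.asarray(returns, dtype=float)
    return r.T @ r

# simulated example: 390 one-minute returns on two assets with correlation 0.6
rng = np.random.default_rng(0)
chol = np.linalg.cholesky(np.array([[1.0, 0.6], [0.6, 1.0]]))
r = rng.standard_normal((390, 2)) @ chol.T / np.sqrt(390)
RC = realized_covariance(r)
```

With 390 intraday returns the estimator is already close to the daily integrated covariance ([[1.0, 0.6], [0.6, 1.0]] here), illustrating the fixed-interval, many-returns asymptotics the paper formalizes.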
Duality covariant type-IIB supersymmetry and nonperturbative consequences
International Nuclear Information System (INIS)
Bars, I.
1997-01-01
Type-IIB supersymmetric theories have an SL(2,Z) invariance, known as U duality, which controls the nonperturbative behavior of the theory. Under SL(2,Z) the supercharges are doublets, implying that the bosonic charges would be singlets or triplets. However, among the bosonic charges there are doublet strings and doublet five-branes which are in conflict with the doublet property of the supercharges. It is shown that the conflict is resolved by structure constants that depend on moduli, such as the tau parameter, which transform under the same SL(2,Z). The resulting superalgebra encodes the nonperturbative duality properties of the theory and is valid for any value of the string coupling constant. The usefulness of the formalism is illustrated by applying it to purely algebraic computations of the tension of (p,q) strings, and the mass and entropy of extremal black holes constructed from D1-branes and D5-branes. In the latter case the nonperturbative coupling dependence of the BPS mass and renormalization is computed for the first time in this paper. It is further argued that the moduli dependence of the superalgebra provides hints for four more dimensions beyond ten, such that the superalgebra is embedded in a fundamental theory which would be covariant under SO(11,3). An outline is given for a matrix theory in 14 dimensions that would be consistent with M(atrix) theory as well as with the above observations. © 1997 The American Physical Society
Covariance of dynamic strain responses for structural damage detection
Li, X. Y.; Wang, L. X.; Law, S. S.; Nie, Z. H.
2017-10-01
A new approach to address the practical problems of condition evaluation and damage detection in structures is proposed, based on the distinct features of a new damage index. The covariance of the strain response function (CoS) is a function of the modal parameters of the structure. A local stiffness reduction in the structure causes a monotonic increase in the CoS. Its sensitivity matrix with respect to local damages of the structure is negative and narrow-banded. The damage extent can be estimated with an approximation to the sensitivity matrix to decouple the identification equations. The CoS sensitivity can be calibrated in practice from two previous states of measurements to estimate approximately the damage extent of a structure. A seven-storey plane frame structure is numerically studied to illustrate the features of the CoS index and the proposed method. A steel circular arch is tested in the laboratory. Natural frequencies change due to damage in the arch, so the occurrence of damage can be judged; the proposed CoS method, however, can identify not only the occurrence of damage but also its location and even its extent, without the need for an analytical model. It is promising for structural condition evaluation of selected components.
Large-region acoustic source mapping using a movable array and sparse covariance fitting.
Zhao, Shengkui; Tuna, Cagdas; Nguyen, Thi Ngoc Tho; Jones, Douglas L
2017-01-01
Large-region acoustic source mapping is important for city-scale noise monitoring. Approaches using a single-position measurement scheme to scan large regions using small arrays cannot provide clean acoustic source maps, while deploying large arrays spanning the entire region of interest is prohibitively expensive. A multiple-position measurement scheme is applied to scan large regions at multiple spatial positions using a movable array of small size. Based on the multiple-position measurement scheme, a sparse-constrained multiple-position vectorized covariance matrix fitting approach is presented. In the proposed approach, the overall sample covariance matrix of the incoherent virtual array is first estimated using the multiple-position array data and then vectorized using the Khatri-Rao (KR) product. A linear model is then constructed for fitting the vectorized covariance matrix and a sparse-constrained reconstruction algorithm is proposed for recovering source powers from the model. The user parameter settings are discussed. The proposed approach is tested on a 30 m × 40 m region and a 60 m × 40 m region using simulated and measured data. Much cleaner acoustic source maps and lower sound pressure level errors are obtained compared to the beamforming approaches and the previous sparse approach [Zhao, Tuna, Nguyen, and Jones, Proc. IEEE Intl. Conf. on Acoustics, Speech and Signal Processing (ICASSP) (2016)].
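The vectorized covariance fitting step can be sketched for a single array position (a toy far-field model; the geometry, wavenumber, and solver below are illustrative assumptions, and the paper's sparse-constrained reconstruction algorithm is replaced by plain non-negative least squares):

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical setup: a 6-sensor uniform linear array, 3 candidate directions
m, wavenumber = 6, np.pi
positions = np.arange(m)
angles = np.array([-0.5, 0.0, 0.4])                 # candidate DOAs (radians)
A = np.exp(1j * wavenumber * np.outer(positions, np.sin(angles)))

true_powers = np.array([2.0, 0.0, 1.0])             # only two active sources
noise_var = 0.1
R = (A * true_powers) @ A.conj().T + noise_var * np.eye(m)

# vectorize: vec(R - noise_var*I) = (A^* Khatri-Rao A) p, where column k of
# the KR matrix is kron(conj(a_k), a_k)
KR = np.vstack([np.kron(A[:, k].conj(), A[:, k]) for k in range(A.shape[1])]).T
b = (R - noise_var * np.eye(m)).ravel(order='F')

# recast the complex system as real and fit powers with a nonnegativity
# constraint, standing in for the paper's sparse reconstruction step
M = np.vstack([KR.real, KR.imag])
y = np.concatenate([b.real, b.imag])
powers, _ = nnls(M, y)
```

Because the toy covariance is noise-free apart from the known diagonal, the fit recovers the source powers exactly, including the zero power of the inactive direction.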
Zhan, Xingzhi
2002-01-01
The main purpose of this monograph is to report on recent developments in the field of matrix inequalities, with emphasis on useful techniques and ingenious ideas. Among other results this book contains the affirmative solutions of eight conjectures. Many theorems unify or sharpen previous inequalities. The author's aim is to streamline the ideas in the literature. The book can be read by research workers, graduate students and advanced undergraduates.
Energy Technology Data Exchange (ETDEWEB)
Smith, D.L.; Guenther, P.T.
1983-11-01
We suggest a procedure for estimating uncertainties in neutron cross sections calculated with a nuclear model descriptive of a specific mass region. It applies standard error propagation techniques, using a model-parameter covariance matrix. Generally, available codes do not generate covariance information in conjunction with their fitting algorithms. Therefore, we resort to estimating a relative covariance matrix a posteriori from a statistical examination of the scatter of elemental parameter values about the regional representation. We numerically demonstrate our method by considering an optical-statistical model analysis of a body of total and elastic scattering data for the light fission-fragment mass region. In this example, strong uncertainty correlations emerge and they conspire to reduce estimated errors to some 50% of those obtained from a naive uncorrelated summation in quadrature. 37 references.
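The standard error-propagation step the abstract refers to is σ_f² = J C Jᵀ, where J is the Jacobian of the calculated quantity with respect to the model parameters and C is the model-parameter covariance matrix. The two-parameter exponential model and the numbers below are placeholders, not the paper's optical-statistical fit.

```python
import numpy as np

# Toy cross-section model sigma(E) = a * exp(-b * E); the model form,
# parameter values and covariance are illustrative assumptions.
def model(E, a, b):
    return a * np.exp(-b * E)

E = np.array([1.0, 2.0, 5.0])          # energies (MeV)
a, b = 3.0, 0.2                        # fitted parameters
C = np.array([[0.04, -0.01],           # model-parameter covariance matrix
              [-0.01, 0.01]])          # (note the negative correlation)

sigma_vals = model(E, a, b)            # calculated cross sections

# Jacobian of the model with respect to (a, b) at each energy
J = np.column_stack([np.exp(-b * E),
                     -a * E * np.exp(-b * E)])

cov_f = J @ C @ J.T                    # covariance of the calculated values
err = np.sqrt(np.diag(cov_f))          # propagated 1-sigma uncertainties
corr = cov_f / np.outer(err, err)
print(err)
print(corr.round(2))                   # strong correlations among energies
```

As the abstract notes, the off-diagonal terms of C matter: correlated parameter uncertainties can substantially reduce the propagated errors relative to a naive uncorrelated sum in quadrature.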
International Nuclear Information System (INIS)
Smith, D.L.; Guenther, P.T.
1983-11-01
We suggest a procedure for estimating uncertainties in neutron cross sections calculated with a nuclear model descriptive of a specific mass region. It applies standard error propagation techniques, using a model-parameter covariance matrix. Generally, available codes do not generate covariance information in conjunction with their fitting algorithms. Therefore, we resort to estimating a relative covariance matrix a posteriori from a statistical examination of the scatter of elemental parameter values about the regional representation. We numerically demonstrate our method by considering an optical-statistical model analysis of a body of total and elastic scattering data for the light fission-fragment mass region. In this example, strong uncertainty correlations emerge and they conspire to reduce estimated errors to some 50% of those obtained from a naive uncorrelated summation in quadrature. 37 references
International Nuclear Information System (INIS)
Putter, Roland de; Wagner, Christian; Verde, Licia; Mena, Olga; Percival, Will J.
2012-01-01
Accurate power spectrum (or correlation function) covariance matrices are a crucial requirement for cosmological parameter estimation from large scale structure surveys. In order to minimize reliance on computationally expensive mock catalogs, it is important to have a solid analytic understanding of the different components that make up a covariance matrix. Considering the matter power spectrum covariance matrix, it has recently been found that there is a potentially dominant effect on mildly non-linear scales due to power in modes of size equal to and larger than the survey volume. This beat coupling effect has been derived analytically in perturbation theory and while it has been tested with simulations, some questions remain unanswered. Moreover, there is an additional effect of these large modes, which has so far not been included in analytic studies, namely the effect on the estimated average density which enters the power spectrum estimate. In this article, we work out analytic, perturbation theory based expressions including both the beat coupling and this local average effect and we show that while, when isolated, beat coupling indeed causes large excess covariance in agreement with the literature, in a realistic scenario this is compensated almost entirely by the local average effect, leaving only ∼ 10% of the excess. We test our analytic expressions by comparison to a suite of large N-body simulations, using both full simulation boxes and subboxes thereof to study cases without beat coupling, with beat coupling and with both beat coupling and the local average effect. For the variances, we find excellent agreement with the analytic expressions for k −1 at z = 0.5, while the correlation coefficients agree to beyond k = 0.4 hMpc −1 . As expected, the range of agreement increases towards higher redshift and decreases slightly towards z = 0. We finish by including the large-mode effects in a full covariance matrix description for arbitrary survey
Babaee, Hessam; Choi, Minseok; Sapsis, Themistoklis P.; Karniadakis, George Em
2017-09-01
We develop a new robust methodology for the stochastic Navier-Stokes equations based on the dynamically-orthogonal (DO) and bi-orthogonal (BO) methods [1-3]. Both approaches are variants of a generalized Karhunen-Loève (KL) expansion in which both the stochastic coefficients and the spatial basis evolve according to system dynamics, hence capturing the low-dimensional structure of the solution. The DO and BO formulations are mathematically equivalent [3], but they exhibit computationally complementary properties. Specifically, the BO formulation may fail due to crossing of the eigenvalues of the covariance matrix, while both BO and DO become unstable when the covariance matrix has a high condition number or zero eigenvalues. To this end, we combine the two methods into a robust hybrid framework and in addition we employ a pseudo-inverse technique to invert the covariance matrix. The robustness of the proposed method stems from addressing the following issues in the DO/BO formulation: (i) eigenvalue crossing: we resolve the issue of eigenvalue crossing in the BO formulation by switching to the DO near eigenvalue crossing using the equivalence theorem and switching back to BO when the distance between eigenvalues is larger than a threshold value; (ii) ill-conditioned covariance matrix: we utilize a pseudo-inverse strategy to invert the covariance matrix; (iii) adaptivity: we utilize an adaptive strategy to add/remove modes to resolve the covariance matrix up to a threshold value. In particular, we introduce a soft-threshold criterion to allow the system to adapt to the newly added/removed mode and therefore avoid repetitive and unnecessary mode addition/removal. When the total variance approaches zero, we show that the DO/BO formulation becomes equivalent to the evolution equation of the Optimally Time-Dependent modes [4]. We demonstrate the capability of the proposed methodology with several numerical examples, namely (i) stochastic Burgers equation: we
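The pseudo-inverse strategy for an ill-conditioned or rank-deficient covariance matrix can be sketched with an eigendecomposition in which near-zero eigenvalues are excluded from inversion. The cutoff value and the toy rank-deficient matrix below are assumptions, not the paper's implementation.

```python
import numpy as np

# A rank-deficient 4x4 covariance matrix (two zero eigenvalues), standing
# in for the ill-conditioned covariance of the KL expansion coefficients.
rng = np.random.default_rng(1)
U = rng.standard_normal((4, 2))
C = U @ U.T

def pinv_cov(C, rtol=1e-10):
    """Moore-Penrose inverse via eigendecomposition: eigenvalues below
    rtol * (largest eigenvalue) are treated as exactly zero, not inverted."""
    w, V = np.linalg.eigh(C)
    inv_w = np.zeros_like(w)
    keep = w > rtol * w.max()
    inv_w[keep] = 1.0 / w[keep]
    return (V * inv_w) @ V.T

Ci = pinv_cov(C)
print(np.allclose(C @ Ci @ C, C))      # Penrose condition holds
```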
Determination of covariant Schwinger terms in anomalous gauge theories
International Nuclear Information System (INIS)
Kelnhofer, G.
1991-01-01
A functional integral method is used to determine equal-time commutators between the covariant currents and the covariant Gauss-law operators in theories which are affected by an anomaly. By using a differential geometrical setup we show how the derivation of consistent and covariant Schwinger terms can be understood on an equal footing. We find a modified consistency condition for the covariant anomaly. As a by-product, the Bardeen-Zumino functional, which relates consistent and covariant anomalies, can be interpreted as a connection on a certain line bundle over all gauge potentials. Finally the covariant commutator anomalies are calculated for the two- and four-dimensional cases. (orig.)
Ex post socio-economic assessment of the Oresund Bridge
DEFF Research Database (Denmark)
Knudsen, M.Aa.; Rich, Jeppe
2013-01-01
…economic assessment, the consumer benefits, including all freight and passenger modes, are compared with the cost profile of the bridge. The monetary contributions are extrapolated to a complete 50-year period. It is revealed that the bridge from 2000–2010 generated a consumer surplus of €2 billion in 2000 prices … to the current transport flows. The importance of having the right assumptions and the ability to model the phasing-in process are underlined. Secondly, we offer a wider discussion on why some projects are more beneficial than others. This is done by comparing the Oresund Bridge, the Channel Tunnel…
Ex-Post : The Investment Performance of Collectible Stamps
Dimson, E.; Spaenjers, C.
2009-01-01
This paper investigates the returns on British collectible postage stamps over the very long run, based on stamp catalogue prices. Between 1900 and 2008, we find an annualized return on stamps of 6.7% in nominal terms, which is equivalent to an average real return of 2.7% per annum. Prices have
Paragrassmann analysis and covariant quantum algebras
International Nuclear Information System (INIS)
Filippov, A.T.; Isaev, A.P.; Kurdikov, A.B.; Pyatov, P.N.
1993-01-01
This report considers, from the algebraic point of view, paragrassmann algebras with one and many paragrassmann generators Θ_i, Θ_i^{p+1} = 0. We construct paragrassmann versions of the Heisenberg algebra. For the special case, this algebra is nothing but the algebra of coordinates and derivatives considered in the context of covariant differential calculus on the quantum hyperplane. The deformation parameter q in our case is a (p+1)-th root of unity. Our construction is nondegenerate only for even p. Taking bilinear combinations of paragrassmann derivatives and coordinates, we realize generators of the covariant quantum algebras as tensor products of (p+1) x (p+1) matrices. (orig./HSI)
Twisted covariant noncommutative self-dual gravity
International Nuclear Information System (INIS)
Estrada-Jimenez, S.; Garcia-Compean, H.; Obregon, O.; Ramirez, C.
2008-01-01
A twisted covariant formulation of noncommutative self-dual gravity is presented. The formulation for constructing twisted noncommutative Yang-Mills theories is used. It is shown that the noncommutative torsion is solved at any order of the θ expansion in terms of the tetrad and some extra fields of the theory. In the process the first order expansion in θ for the Plebanski action is explicitly obtained.
Superfield quantization in Sp(2) covariant formalism
Lavrov, P M
2001-01-01
The rules of superfield Sp(2) covariant quantization of arbitrary gauge theories are generalized to the case where the gauge is introduced through derivative equations for the gauge functional. Possible realizations of the extended antibrackets are considered, and it is shown that only one of these realizations is compatible with the transformations of extended BRST symmetry in the form of supertranslations along the Grassmann superspace coordinates.
Torsion and geometrostasis in covariant superstrings
International Nuclear Information System (INIS)
Zachos, C.
1985-01-01
The covariant action for freely propagating heterotic superstrings consists of a metric and a torsion term with a special relative strength. It is shown that the strength for which torsion flattens the underlying 10-dimensional superspace geometry is precisely that which yields free oscillators on the light cone. This is in complete analogy with the geometrostasis of two-dimensional sigma-models with Wess-Zumino interactions. 13 refs
Covariant derivatives of the Berezin transform
Czech Academy of Sciences Publication Activity Database
Engliš, Miroslav; Otáhalová, R.
2011-01-01
Roč. 363, č. 10 (2011), s. 5111-5129 ISSN 0002-9947 R&D Projects: GA AV ČR IAA100190802 Keywords : Berezin transform * Berezin symbol * covariant derivative Subject RIV: BA - General Mathematics Impact factor: 1.093, year: 2011 http://www.ams.org/journals/tran/2011-363-10/S0002-9947-2011-05111-1/home.html
Torsion and geometrostasis in covariant superstrings
Energy Technology Data Exchange (ETDEWEB)
Zachos, C.
1985-01-01
The covariant action for freely propagating heterotic superstrings consists of a metric and a torsion term with a special relative strength. It is shown that the strength for which torsion flattens the underlying 10-dimensional superspace geometry is precisely that which yields free oscillators on the light cone. This is in complete analogy with the geometrostasis of two-dimensional sigma-models with Wess-Zumino interactions. 13 refs.
Linear Covariance Analysis for a Lunar Lander
Jang, Jiann-Woei; Bhatt, Sagar; Fritz, Matthew; Woffinden, David; May, Darryl; Braden, Ellen; Hannan, Michael
2017-01-01
A next-generation lunar lander Guidance, Navigation, and Control (GNC) system, which includes a state-of-the-art optical sensor suite, is proposed in a concept design cycle. The design goal is to allow the lander to softly land within the prescribed landing precision. The achievement of this precision landing requirement depends on proper selection of the sensor suite. In this paper, a robust sensor selection procedure is demonstrated using a Linear Covariance (LinCov) analysis tool developed by Draper.
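A linear covariance (LinCov) analysis propagates the state error covariance itself through linearized dynamics and measurement updates, rather than running Monte Carlo trajectories. The sketch below uses a toy altitude/velocity model with an altimeter; the dynamics, noise levels, and sensor are illustrative assumptions, not Draper's tool.

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])     # altitude/velocity dynamics
Q = np.diag([1e-4, 1e-3])                 # process noise covariance
H = np.array([[1.0, 0.0]])                # altimeter measures altitude only
R = np.array([[4.0]])                     # sensor noise variance

P = np.diag([100.0, 25.0])                # initial dispersion covariance
for _ in range(50):
    P = F @ P @ F.T + Q                   # propagate covariance
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    P = (np.eye(2) - K @ H) @ P           # measurement update

print(np.sqrt(np.diag(P)))                # predicted 1-sigma dispersions
```

Repeating this propagation for different candidate sensor suites (different H and R) gives the predicted landing dispersions on which a sensor selection can be based.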
The covariant formulation of f(T) gravity
International Nuclear Information System (INIS)
Krššák, Martin; Saridakis, Emmanuel N
2016-01-01
We show that the well-known problem of frame dependence and violation of local Lorentz invariance in the usual formulation of f(T) gravity is a consequence of neglecting the role of spin connection. We re-formulate f(T) gravity starting from, instead of the ‘pure tetrad’ teleparallel gravity, the covariant teleparallel gravity, using both the tetrad and the spin connection as dynamical variables, resulting in a fully covariant, consistent, and frame-independent version of f(T) gravity, which does not suffer from the notorious problems of the usual, pure tetrad, f(T) theory. We present the method to extract solutions for the most physically important cases, such as the Minkowski, the Friedmann–Robertson–Walker (FRW) and the spherically symmetric ones. We show that in covariant f(T) gravity we are allowed to use an arbitrary tetrad in an arbitrary coordinate system along with the corresponding spin connection, resulting always in the same physically relevant field equations. (paper)
Development of covariance capabilities in EMPIRE code
Energy Technology Data Exchange (ETDEWEB)
Herman,M.; Pigni, M.T.; Oblozinsky, P.; Mughabghab, S.F.; Mattoon, C.M.; Capote, R.; Cho, Young-Sik; Trkov, A.
2008-06-24
The nuclear reaction code EMPIRE has been extended to provide evaluation capabilities for neutron cross section covariances in the thermal, resolved resonance, unresolved resonance and fast neutron regions. The Atlas of Neutron Resonances by Mughabghab is used as a primary source of information on uncertainties at low energies. Care is taken to ensure consistency among the resonance parameter uncertainties and those for thermal cross sections. The resulting resonance parameter covariances are formatted in the ENDF-6 File 32. In the fast neutron range our methodology is based on model calculations with the code EMPIRE combined with experimental data through several available approaches. The model-based covariances can be obtained using deterministic (Kalman) or stochastic (Monte Carlo) propagation of model parameter uncertainties. We show that these two procedures yield comparable results. The Kalman filter and/or the generalized least square fitting procedures are employed to incorporate experimental information. We compare the two approaches analyzing results for the major reaction channels on ⁸⁹Y. We also discuss a long-standing issue of unreasonably low uncertainties and link it to the rigidity of the model.
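The stochastic (Monte Carlo) propagation of model-parameter uncertainties mentioned above can be sketched by sampling parameters from their covariance, evaluating the model for each sample, and taking the sample covariance of the resulting curves. The two-parameter toy model stands in for an EMPIRE calculation; all numbers are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigma(E, depth, radius):
    # Toy cross-section model (an assumption, not a nuclear model)
    return depth * radius**2 / (1.0 + E)

E = np.linspace(0.1, 20.0, 5)             # fast-neutron energy grid (MeV)
mean = np.array([45.0, 1.25])             # parameter means (depth, radius)
Cp = np.diag([2.0**2, 0.05**2])           # model-parameter covariance

samples = rng.multivariate_normal(mean, Cp, size=5000)
curves = np.array([sigma(E, d, r) for d, r in samples])
cov = np.cov(curves.T)                    # energy-energy covariance matrix
corr = cov / np.sqrt(np.outer(np.diag(cov), np.diag(cov)))
print(corr.round(2))                      # strongly correlated across energies
```

Because the toy parameters only rescale a fixed energy shape, the resulting cross sections are almost perfectly correlated across energies, which is exactly the kind of structure a multigroup covariance matrix records.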
Covariant electrodynamics in linear media: Optical metric
Thompson, Robert T.
2018-03-01
While the postulate of covariance of Maxwell's equations for all inertial observers led Einstein to special relativity, it was the further demand of general covariance—form invariance under general coordinate transformations, including between accelerating frames—that led to general relativity. Several lines of inquiry over the past two decades, notably the development of metamaterial-based transformation optics, have spurred a greater interest in the role of geometry and space-time covariance for electrodynamics in ponderable media. I develop a generally covariant, coordinate-free framework for electrodynamics in general dielectric media residing in curved background space-times. In particular, I derive a relation for the spatial medium parameters measured by an arbitrary timelike observer. In terms of those medium parameters I derive an explicit expression for the pseudo-Finslerian optical metric of birefringent media and show how it reduces to a pseudo-Riemannian optical metric for nonbirefringent media. This formulation provides a basis for a unified approach to ray and congruence tracing through media in curved space-times that may smoothly vary among positively refracting, negatively refracting, and vacuum.
Performance of penalized maximum likelihood in estimation of genetic covariances matrices
Directory of Open Access Journals (Sweden)
Meyer Karin
2011-11-01
Background: Estimation of genetic covariance matrices for multivariate problems comprising more than a few traits is inherently problematic, since sampling variation increases dramatically with the number of traits. This paper investigates the efficacy of regularized estimation of covariance components in a maximum likelihood framework, imposing a penalty on the likelihood designed to reduce sampling variation. In particular, penalties that "borrow strength" from the phenotypic covariance matrix are considered. Methods: An extensive simulation study was carried out to investigate the reduction in average 'loss', i.e. the deviation in estimated matrices from the population values, and the accompanying bias for a range of parameter values and sample sizes. A number of penalties are examined, penalizing either the canonical eigenvalues or the genetic covariance or correlation matrices. In addition, several strategies to determine the amount of penalization to be applied, i.e. to estimate the appropriate tuning factor, are explored. Results: It is shown that substantial reductions in loss for estimates of genetic covariance can be achieved for small to moderate sample sizes. While no penalty performed best overall, penalizing the variance among the estimated canonical eigenvalues on the logarithmic scale or shrinking the genetic towards the phenotypic correlation matrix appeared most advantageous. Estimating the tuning factor using cross-validation resulted in a loss reduction 10 to 15% less than that obtained if population values were known. Applying a mild penalty, chosen so that the deviation in likelihood from the maximum was non-significant, performed as well if not better than cross-validation and can be recommended as a pragmatic strategy. Conclusions: Penalized maximum likelihood estimation provides the means to 'make the most' of limited and precious data and facilitates more stable estimation for multi-dimensional analyses. It should
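One of the penalties examined, shrinking the genetic towards the phenotypic correlation matrix, amounts to a convex combination of the two matrices. The tuning factor and example matrices below are assumptions; in the paper the factor is estimated, e.g. by cross-validation.

```python
import numpy as np

def shrink_corr(R_g, R_p, tau):
    """Convex combination shrinking the genetic correlation R_g
    towards the phenotypic correlation R_p by tuning factor tau."""
    return (1.0 - tau) * R_g + tau * R_p

R_g = np.array([[1.0, 0.9], [0.9, 1.0]])   # noisy genetic correlation estimate
R_p = np.array([[1.0, 0.3], [0.3, 1.0]])   # well-estimated phenotypic correlation
tau = 0.4                                   # tuning factor (an assumption)
R_shrunk = shrink_corr(R_g, R_p, tau)

# Rebuild a genetic covariance matrix from the shrunken correlations
sd_g = np.array([2.0, 1.5])                 # genetic standard deviations
G = R_shrunk * np.outer(sd_g, sd_g)
print(R_shrunk)
print(G)
```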
Covariance Between Genotypic Effects and its Use for Genomic Inference in Half-Sib Families
Wittenburg, Dörte; Teuscher, Friedrich; Klosa, Jan; Reinsch, Norbert
2016-01-01
In livestock, current statistical approaches utilize extensive molecular data, e.g., single nucleotide polymorphisms (SNPs), to improve the genetic evaluation of individuals. The number of model parameters increases with the number of SNPs, so the multicollinearity between covariates can affect the results obtained using whole genome regression methods. In this study, dependencies between SNPs due to linkage and linkage disequilibrium among the chromosome segments were explicitly considered in methods used to estimate the effects of SNPs. The population structure affects the extent of such dependencies, so the covariance among SNP genotypes was derived for half-sib families, which are typical in livestock populations. Conditional on the SNP haplotypes of the common parent (sire), the theoretical covariance was determined using the haplotype frequencies of the population from which the individual parent (dam) was derived. The resulting covariance matrix was included in a statistical model for a trait of interest, and this covariance matrix was then used to specify prior assumptions for SNP effects in a Bayesian framework. The approach was applied to one family in simulated scenarios (few and many quantitative trait loci) and using semireal data obtained from dairy cattle to identify genome segments that affect performance traits, as well as to investigate the impact on predictive ability. Compared with a method that does not explicitly consider any of the relationship among predictor variables, the accuracy of genetic value prediction was improved by 10–22%. The results show that the inclusion of dependence is particularly important for genomic inference based on small sample sizes. PMID:27402363
Bhatia, Rajendra
1997-01-01
A good part of matrix theory is functional analytic in spirit. This statement can be turned around. There are many problems in operator theory, where most of the complexities and subtleties are present in the finite-dimensional case. My purpose in writing this book is to present a systematic treatment of methods that are useful in the study of such problems. This book is intended for use as a text for upper division and graduate courses. Courses based on parts of the material have been given by me at the Indian Statistical Institute and at the University of Toronto (in collaboration with Chandler Davis). The book should also be useful as a reference for research workers in linear algebra, operator theory, mathematical physics and numerical analysis. A possible subtitle of this book could be Matrix Inequalities. A reader who works through the book should expect to become proficient in the art of deriving such inequalities. Other authors have compared this art to that of cutting diamonds. One first has to...
Inverse modeling of the terrestrial carbon flux in China with flux covariance among inverted regions
Wang, H.; Jiang, F.; Chen, J. M.; Ju, W.; Wang, H.
2011-12-01
Quantitative understanding of the role of the ocean and terrestrial biosphere in the global carbon cycle, and of their response and feedback to climate change, is required for the future projection of the global climate. China has the largest amount of anthropogenic CO2 emission, diverse terrestrial ecosystems and an unprecedented rate of urbanization. Thus, information on spatial and temporal distributions of the terrestrial carbon flux in China is of great importance in understanding the global carbon cycle. We developed a nested inversion with a focus on China. Based on the Transcom 22 regions for the globe, we divide China and its neighboring countries into 17 regions, making 39 regions in total for the globe. A Bayesian synthesis inversion is made to estimate the terrestrial carbon flux based on GlobalView CO2 data. In the inversion, GEOS-Chem is used as the transport model to develop the transport matrix. A terrestrial ecosystem model named BEPS is used to produce the prior surface flux to constrain the inversion. However, the sparseness of available observation stations in Asia poses a challenge to the inversion for the 17 small regions. To obtain additional constraint on the inversion, a prior flux covariance matrix is constructed using the BEPS model through analyzing the correlation in the net carbon flux among regions under variable climate conditions. The use of the covariance among different regions in the inversion effectively extends the information content of CO2 observations to more regions. The carbon fluxes over the 39 land and ocean regions are inverted for the period from 2004 to 2009. In order to investigate the impact of introducing the covariance matrix with non-zero off-diagonal values to the inversion, the inverted terrestrial carbon flux over China is evaluated against ChinaFlux eddy-covariance observations after applying an upscaling methodology.
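A Bayesian synthesis inversion of this kind reduces to the standard update x̂ = x_prior + K(y − Hx_prior) with gain K = BHᵀ(HBHᵀ + R)⁻¹, where non-zero off-diagonal terms in the prior flux covariance B couple the regions so that observations constrain even poorly observed ones. The dimensions and numbers below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_reg, n_obs = 4, 3
H = rng.random((n_obs, n_reg))            # transport (Jacobian) matrix
x_true = np.array([1.0, -0.5, 0.3, 0.8])  # "true" regional fluxes
R = 0.01 * np.eye(n_obs)                  # observation error covariance
y = H @ x_true + rng.multivariate_normal(np.zeros(n_obs), R)

x_prior = np.zeros(n_reg)
# Prior flux covariance with off-diagonals coupling neighbouring regions
B = 0.5 * np.eye(n_reg) + 0.2 * np.eye(n_reg, k=1) + 0.2 * np.eye(n_reg, k=-1)

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
x_post = x_prior + K @ (y - H @ x_prior)       # posterior flux estimate
A_post = B - K @ H @ B                         # posterior covariance
print(x_post.round(2))
print(np.diag(A_post) < np.diag(B))            # uncertainty reduced per region
```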
ERRORJ. Covariance processing code. Version 2.2
International Nuclear Information System (INIS)
Chiba, Go
2004-07-01
ERRORJ is the covariance processing code that can produce covariance data of multi-group cross sections, which are essential for uncertainty analyses of nuclear parameters such as the neutron multiplication factor. The ERRORJ code can process the covariance data of cross sections including resonance parameters, and of angular and energy distributions of secondary neutrons; those covariance data cannot be processed by the other covariance processing codes. ERRORJ has been modified and version 2.2 has been developed. This document describes the modifications and how to use the code. The main modifications are as follows. Non-diagonal elements of covariance matrices are calculated in the resonance energy region. An option for high-speed calculation is implemented. The perturbation amount is optimized in sensitivity calculations. The effect of resonance self-shielding on the covariance of multi-group cross sections can be considered. It is possible to read a compact covariance format proposed by N.M. Larson. (author)
PUFF-IV, Code System to Generate Multigroup Covariance Matrices from ENDF/B-VI Uncertainty Files
International Nuclear Information System (INIS)
2007-01-01
1 - Description of program or function: The PUFF-IV code system processes ENDF/B-VI formatted nuclear cross section covariance data into multigroup covariance matrices. PUFF-IV is the newest release in this series of codes used to process ENDF uncertainty information and to generate the desired multi-group correlation matrix for the evaluation of interest. This version includes corrections and enhancements over previous versions. It is written in Fortran 90 and allows for a more modular design, thus facilitating future upgrades. PUFF-IV enhances support for resonance parameter covariance formats described in the ENDF standard and now handles almost all resonance parameter covariance information in the resolved region, with the exception of the long range covariance sub-subsections. PUFF-IV is normally used in conjunction with an AMPX master library containing group averaged cross section data. Two utility modules are included in this package to facilitate the data interface. The module SMILER allows one to use NJOY generated GENDF files containing group averaged cross section data in conjunction with PUFF-IV. The module COVCOMP allows one to compare two files written in COVERX format. 2 - Methods: Cross section and flux values on a 'super energy grid,' consisting of the union of the required energy group structure and the energy data points in the ENDF/B-V file, are interpolated from the input cross sections and fluxes. Covariance matrices are calculated for this grid and then collapsed to the required group structure. 3 - Restrictions on the complexity of the problem: PUFF-IV cannot process covariance information for energy and angular distributions of secondary particles. PUFF-IV does not process covariance information in Files 34 and 35; nor does it process covariance information in File 40. These new formats will be addressed in a future version of PUFF
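The collapse of a covariance matrix from a fine "super energy grid" to a coarser group structure can be sketched as C_G = W C Wᵀ, where each row of W holds flux weights that sum to one within a coarse group. The grids, fluxes, and group mapping below are illustrative assumptions, not PUFF-IV's implementation.

```python
import numpy as np

# Fine "super grid" of 6 energy points collapsed to 2 coarse groups.
phi = np.array([1.0, 2.0, 2.0, 1.0, 0.5, 0.5])       # weighting flux
groups = [[0, 1, 2], [3, 4, 5]]                       # fine-to-coarse mapping

# Weight matrix: each row holds normalized flux weights for one group
W = np.zeros((len(groups), phi.size))
for g, idx in enumerate(groups):
    W[g, idx] = phi[idx] / phi[idx].sum()

# A symmetric positive semi-definite fine-grid covariance (illustrative)
rng = np.random.default_rng(4)
A = rng.standard_normal((6, 6))
C_fine = A @ A.T

C_group = W @ C_fine @ W.T                            # collapsed 2x2 covariance
print(C_group)
```

Because the collapse is a congruence transform, the group covariance inherits symmetry and positive semi-definiteness from the fine-grid matrix.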
International Nuclear Information System (INIS)
Akemann, Gernot; Checinski, Tomasz; Kieburg, Mario
2016-01-01
We compute the spectral statistics of the sum H of two independent complex Wishart matrices, each of which is correlated with a different covariance matrix. Random matrix theory enjoys many applications including sums and products of random matrices. Typically ensembles with correlations among the matrix elements are much more difficult to solve. Using a combination of supersymmetry, superbosonisation and bi-orthogonal functions we are able to determine all spectral k-point density correlation functions of H for arbitrary matrix size N. In the half-degenerate case, when one of the covariance matrices is proportional to the identity, the recent results by Kumar for the joint eigenvalue distribution of H serve as our starting point. In this case the ensemble has a bi-orthogonal structure and we explicitly determine its kernel, providing its exact solution for finite N. The kernel follows from computing the expectation value of a single characteristic polynomial. In the general non-degenerate case the generating function for the k-point resolvent is determined from a supersymmetric evaluation of the expectation value of k ratios of characteristic polynomials. Numerical simulations illustrate our findings for the spectral density at finite N and we also give indications how to do the asymptotic large-N analysis. (paper)
High-dimensional covariance estimation with high-dimensional data
Pourahmadi, Mohsen
2013-01-01
Methods for estimating sparse and large covariance matrices. Covariance and correlation matrices play fundamental roles in every aspect of the analysis of multivariate data collected from a variety of fields including business and economics, health care, engineering, and environmental and physical sciences. High-Dimensional Covariance Estimation provides accessible and comprehensive coverage of the classical and modern approaches for estimating covariance matrices as well as their applications to the rapidly developing areas lying at the intersection of statistics and mac
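Two classical regularized estimators in this literature, banding and thresholding the sample covariance matrix, can be sketched directly; the bandwidth and threshold values below are assumptions chosen for illustration.

```python
import numpy as np

def band(S, k):
    """Banding: keep entries within k diagonals of the main diagonal."""
    n = S.shape[0]
    mask = np.abs(np.subtract.outer(np.arange(n), np.arange(n))) <= k
    return np.where(mask, S, 0.0)

def threshold(S, t):
    """Thresholding: zero off-diagonal entries with absolute value below t."""
    T = np.where(np.abs(S) >= t, S, 0.0)
    np.fill_diagonal(T, np.diag(S))       # always keep the variances
    return T

rng = np.random.default_rng(5)
X = rng.standard_normal((50, 8))          # 50 observations of an 8-vector
S = np.cov(X, rowvar=False)               # sample covariance matrix
print(band(S, 1))                         # banded (tridiagonal) estimate
print(threshold(S, 0.2))                  # sparse (thresholded) estimate
```

Banding suits ordered variables, such as lags of a stationary time series, while thresholding targets unordered sparsity; in practice the bandwidth or threshold is tuned, e.g. by cross-validation.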
Belitsky, A. V.
2017-10-01
The Operator Product Expansion for null polygonal Wilson loop in planar maximally supersymmetric Yang-Mills theory runs systematically in terms of multi-particle pentagon transitions which encode the physics of excitations propagating on the color flux tube ending on the sides of the four-dimensional contour. Their dynamics was unraveled in the past several years and culminated in a complete description of pentagons as an exact function of the 't Hooft coupling. In this paper we provide a solution for the last building block in this program, the SU(4) matrix structure arising from internal symmetry indices of scalars and fermions. This is achieved by a recursive solution of the Mirror and Watson equations obeyed by the so-called singlet pentagons and fixing the form of the twisted component in their tensor decomposition. The non-singlet, or charged, pentagons are deduced from these by a limiting procedure.
Directory of Open Access Journals (Sweden)
A.V. Belitsky
2017-10-01
The Operator Product Expansion for null polygonal Wilson loop in planar maximally supersymmetric Yang–Mills theory runs systematically in terms of multi-particle pentagon transitions which encode the physics of excitations propagating on the color flux tube ending on the sides of the four-dimensional contour. Their dynamics was unraveled in the past several years and culminated in a complete description of pentagons as an exact function of the 't Hooft coupling. In this paper we provide a solution for the last building block in this program, the SU(4) matrix structure arising from internal symmetry indices of scalars and fermions. This is achieved by a recursive solution of the Mirror and Watson equations obeyed by the so-called singlet pentagons and fixing the form of the twisted component in their tensor decomposition. The non-singlet, or charged, pentagons are deduced from these by a limiting procedure.
Comparative Analyses of Phenotypic Trait Covariation within and among Populations.
Peiman, Kathryn S; Robinson, Beren W
2017-10-01
Many morphological, behavioral, physiological, and life-history traits covary across the biological scales of individuals, populations, and species. However, the processes that cause traits to covary also change over these scales, challenging our ability to use patterns of trait covariance to infer process. Trait relationships are also widely assumed to have generic functional relationships with similar evolutionary potentials, and even though many different trait relationships are now identified, there is little appreciation that these may influence trait covariation and evolution in unique ways. We use a trait-performance-fitness framework to classify and organize trait relationships into three general classes, address which ones more likely generate trait covariation among individuals in a population, and review how selection shapes phenotypic covariation. We generate predictions about how trait covariance changes within and among populations as a result of trait relationships and in response to selection and consider how these can be tested with comparative data. Careful comparisons of covariation patterns can narrow the set of hypothesized processes that cause trait covariation when the form of the trait relationship and how it responds to selection yield clear predictions about patterns of trait covariation. We discuss the opportunities and limitations of comparative approaches to evaluate hypotheses about the evolutionary causes and consequences of trait covariation and highlight the importance of evaluating patterns within populations replicated in the same and in different selective environments. Explicit hypotheses about trait relationships are key to generating effective predictions about phenotype and its evolution using covariance data.
Covariant differential calculus on quantum spheres of odd dimension
International Nuclear Information System (INIS)
Welk, M.
1998-01-01
Covariant differential calculus on the quantum spheres S_q^{2N-1} is studied. Two classification results for covariant first-order differential calculi are proved. As an important step towards a description of the noncommutative geometry of the quantum spheres, a framework of covariant differential calculus is established, including first and higher order calculi and a symmetry concept. (author)
On the covariance matrices in the evaluated nuclear data
International Nuclear Information System (INIS)
Corcuera, R.P.
1983-05-01
The implications of the uncertainties of nuclear data for reactor calculations are shown. The concepts of variance, covariance and correlation are expressed first through intuitive definitions and then through statistical theory. The format of the covariance data in ENDF/B is explained, and the formulas to obtain the multigroup covariances are given. (Author) [pt
Evaluation of covariance in theoretical calculation of nuclear data
International Nuclear Information System (INIS)
Kikuchi, Yasuyuki
1981-01-01
Covariances of cross sections calculated with the statistical model are discussed. Two categories of covariance are considered: one caused by the model approximation and the other by errors in the model parameters. As an example, the covariances are calculated for ¹⁰⁰Ru. (author)
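The second error category above, parameter errors, is conventionally propagated to cross-section covariances through first-order sensitivities, C = S C_p Sᵀ. A minimal sketch of that "sandwich rule" with invented sensitivities and parameter covariances (not evaluated nuclear data):

```python
import numpy as np

# Hedged sketch: propagating model-parameter uncertainties to a
# cross-section covariance via first-order sensitivities,
#   C_sigma = S @ C_p @ S.T
# All numerical values below are illustrative, not evaluated data.

# Sensitivities d(sigma_i)/d(p_j) of 3 cross-section points to 2 parameters
S = np.array([[1.0, 0.2],
              [0.8, 0.5],
              [0.3, 1.1]])

# Assumed covariance matrix of the two model parameters
C_p = np.array([[0.04, 0.01],
                [0.01, 0.09]])

# First-order ("sandwich") propagation of the parameter errors
C_sigma = S @ C_p @ S.T

print(np.allclose(C_sigma, C_sigma.T))  # True: propagated covariance is symmetric
```

The propagated matrix is automatically symmetric and positive semidefinite whenever C_p is, which is one reason this construction is preferred over assembling covariances element by element.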
Covariate Imbalance and Precision in Measuring Treatment Effects
Liu, Xiaofeng Steven
2011-01-01
Covariate adjustment can increase the precision of estimates by removing unexplained variance from the error in randomized experiments, although chance covariate imbalance tends to counteract the improvement in precision. The author develops an easy measure to examine chance covariate imbalance in randomization by standardizing the average…
Earth Observation System Flight Dynamics System Covariance Realism
Zaidi, Waqar H.; Tracewell, David
2016-01-01
This presentation applies a covariance realism technique to the National Aeronautics and Space Administration (NASA) Earth Observation System (EOS) Aqua and Aura spacecraft based on inferential statistics. The technique consists of three parts: calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics.
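The abstract does not specify the test statistic; a common choice in covariance realism work is the squared Mahalanobis distance of the definitive-minus-predicted state error, which follows a chi-square distribution with n degrees of freedom when the covariance is realistic. A hedged sketch with simulated errors (not EOS data):

```python
import numpy as np

# Hedged sketch (not the NASA implementation): a standard covariance realism
# check computes the squared Mahalanobis distance of each state error with
# respect to the propagated covariance P; for a realistic P these distances
# are chi-square distributed with n degrees of freedom (mean n).

def mahalanobis_sq(error, P):
    """Squared Mahalanobis distance of a state error w.r.t. covariance P."""
    return float(error @ np.linalg.solve(P, error))

rng = np.random.default_rng(0)
n = 6                                             # position + velocity state
P = np.diag([1.0, 1.0, 1.0, 0.01, 0.01, 0.01])    # assumed propagated covariance

# Simulate errors actually drawn from P (i.e., a "realistic" covariance)
errors = rng.multivariate_normal(np.zeros(n), P, size=2000)
m2 = np.array([mahalanobis_sq(e, P) for e in errors])

# Sample mean of the squared distances should be close to n
print(abs(m2.mean() - n) < 0.5)  # True
```

A covariance that is too small inflates these distances well above n; one that is too large deflates them, which is what the "proper assessment" step detects.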
Evaluation of covariance for 238U cross sections
International Nuclear Information System (INIS)
Kawano, Toshihiko; Nakamura, Masahiro; Matsuda, Nobuyuki; Kanda, Yukinori
1995-01-01
Covariances of ²³⁸U are generated using analytic functions for representation of the cross sections. The covariances of the (n,2n) and (n,3n) reactions are derived with a spline function, while the covariances of the total and the inelastic scattering cross sections are estimated with a linearized nuclear model calculation. (author)
Zeng, Rongping; Petrick, Nicholas; Gavrielides, Marios A; Myers, Kyle J
2011-10-07
Multi-slice computed tomography (MSCT) scanners have become popular volumetric imaging tools. Deterministic and random properties of the resulting CT scans have been studied in the literature. Due to the large number of voxels in the three-dimensional (3D) volumetric dataset, full characterization of the noise covariance in MSCT scans is difficult to tackle. However, as usage of such datasets for quantitative disease diagnosis grows, so does the importance of understanding the noise properties because of their effect on the accuracy of the clinical outcome. The goal of this work is to study noise covariance in the helical MSCT volumetric dataset. We explore possible approximations to the noise covariance matrix with reduced degrees of freedom, including voxel-based variance, one-dimensional (1D) correlation, two-dimensional (2D) in-plane correlation and the noise power spectrum (NPS). We further examine the effect of various noise covariance models on the accuracy of a prewhitening matched filter nodule size estimation strategy. Our simulation results suggest that the 1D longitudinal, 2D in-plane and NPS prewhitening approaches can improve the performance of nodule size estimation algorithms. When taking into account computational costs in determining noise characterizations, the NPS model may be the most efficient approximation to the MSCT noise covariance matrix.
Separation of Correlated Astrophysical Sources Using Multiple-Lag Data Covariance Matrices
Directory of Open Access Journals (Sweden)
Baccigalupi C
2005-01-01
This paper proposes a new strategy to separate astrophysical sources that are mutually correlated. This strategy is based on second-order statistics and exploits prior information about the possible structure of the mixing matrix. Unlike ICA blind separation approaches, where the sources are assumed mutually independent and no prior knowledge is assumed about the mixing matrix, our strategy allows the independence assumption to be relaxed and performs the separation of even significantly correlated sources. Besides the mixing matrix, our strategy is also capable of evaluating the source covariance functions at several lags. Moreover, once the mixing parameters have been identified, a simple deconvolution can be used to estimate the probability density functions of the source processes. To benchmark our algorithm, we used a database that simulates the one expected from the instruments that will operate onboard ESA's Planck Surveyor Satellite to measure the CMB anisotropies all over the celestial sphere.
Covariant, chirally symmetric, confining model of mesons
International Nuclear Information System (INIS)
Gross, F.; Milana, J.
1991-01-01
We introduce a new model of mesons as quark-antiquark bound states. The model is covariant, confining, and chirally symmetric. Our equations give an analytic solution for a zero-mass pseudoscalar bound state in the case of exact chiral symmetry, and also reduce to the familiar, highly successful nonrelativistic linear potential models in the limit of heavy-quark mass and lightly bound systems. In this fashion we are constructing a unified description of all the mesons from the π through the Υ. Numerical solutions for other cases are also presented.
Cosmology of a covariant Galilean field.
De Felice, Antonio; Tsujikawa, Shinji
2010-09-10
We study the cosmology of a covariant scalar field respecting a Galilean symmetry in flat space-time. We show the existence of a tracker solution that finally approaches a de Sitter fixed point responsible for cosmic acceleration today. The viable region of model parameters is clarified by deriving conditions under which ghosts and Laplacian instabilities of scalar and tensor perturbations are absent. The field equation of state exhibits a peculiar phantomlike behavior along the tracker, which allows a possibility to observationally distinguish the Galileon gravity from the cold dark matter model with a cosmological constant.
Covariant differential complexes of quantum linear groups
International Nuclear Information System (INIS)
Isaev, A.P.; Pyatov, P.N.
1993-01-01
We consider the possible covariant external algebra structures for Cartan's 1-forms Ω on GL_q(N) and SL_q(N). Our starting point is that the Ω's realize an adjoint representation of the quantum group and that all monomials in the Ω's possess a unique ordering. For the external algebras obtained, we define the differential mapping d possessing the usual nilpotence condition, and a generally deformed version of the Leibniz rules. The status of the known examples of GL_q(N)-differential calculi in the proposed classification scheme and the problems of SL_q(N)-reduction are discussed. (author). 26 refs
Minimal covariant observables identifying all pure states
Energy Technology Data Exchange (ETDEWEB)
Carmeli, Claudio, E-mail: claudio.carmeli@gmail.com [D.I.M.E., Università di Genova, Via Cadorna 2, I-17100 Savona (Italy); I.N.F.N., Sezione di Genova, Via Dodecaneso 33, I-16146 Genova (Italy); Heinosaari, Teiko, E-mail: teiko.heinosaari@utu.fi [Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku (Finland); Toigo, Alessandro, E-mail: alessandro.toigo@polimi.it [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); I.N.F.N., Sezione di Milano, Via Celoria 16, I-20133 Milano (Italy)
2013-09-02
It has been recently shown by Heinosaari, Mazzarella and Wolf (2013) [1] that an observable that identifies all pure states of a d-dimensional quantum system has minimally 4d−4 outcomes, or slightly fewer (the exact number depending on d). However, no simple construction of this type of minimal observable is known. We investigate covariant observables that identify all pure states and have the minimal number of outcomes. It is shown that the existence of this kind of observable depends on the dimension of the Hilbert space.
Linear Covariance Analysis and Epoch State Estimators
Markley, F. Landis; Carpenter, J. Russell
2014-01-01
This paper extends in two directions the results of prior work on generalized linear covariance analysis of both batch least-squares and sequential estimators. The first is an improved treatment of process noise in the batch, or epoch state, estimator with an epoch time that may be later than some or all of the measurements in the batch. The second is to account for process noise in specifying the gains in the epoch state estimator. We establish the conditions under which the latter estimator is equivalent to the Kalman filter.
Agnostic Estimation of Mean and Covariance
Lai, Kevin A.; Rao, Anup B.; Vempala, Santosh
2016-01-01
We consider the problem of estimating the mean and covariance of a distribution from iid samples in $\\mathbb{R}^n$, in the presence of an $\\eta$ fraction of malicious noise; this is in contrast to much recent work where the noise itself is assumed to be from a distribution of known type. The agnostic problem includes many interesting special cases, e.g., learning the parameters of a single Gaussian (or finding the best-fit Gaussian) when $\\eta$ fraction of data is adversarially corrupted, agn...
On the Galilean covariance of classical mechanics
International Nuclear Information System (INIS)
Horzela, A.; Kapuscik, E.; Kempczynski, J.; Joint Inst. for Nuclear Research, Dubna
1991-08-01
A Galilean covariant approach to the classical mechanics of a single interacting particle is described. In this scheme constitutive relations defining forces are rejected, and the acting forces are determined by fundamental differential equations. It is shown that the total energy of the interacting particle transforms under Galilean transformations differently from the kinetic energy. The statement is illustrated by the exactly solvable examples of the harmonic oscillator and the case of constant forces, and also, in a suitable version of perturbation theory, for the anharmonic oscillator. (author)
Exact Covariance Thresholding into Connected Components for Large-Scale Graphical Lasso.
Mazumder, Rahul; Hastie, Trevor
2012-03-01
We consider the sparse inverse covariance regularization problem or graphical lasso with regularization parameter λ. Suppose the sample covariance graph formed by thresholding the entries of the sample covariance matrix at λ is decomposed into connected components. We show that the vertex-partition induced by the connected components of the thresholded sample covariance graph (at λ) is exactly equal to that induced by the connected components of the estimated concentration graph, obtained by solving the graphical lasso problem for the same λ. This characterizes a very interesting property of a path of graphical lasso solutions. Furthermore, this simple rule, when used as a wrapper around existing algorithms for the graphical lasso, leads to enormous performance gains. For a range of values of λ, our proposal splits a large graphical lasso problem into smaller tractable problems, making it possible to solve an otherwise infeasible large-scale problem. We illustrate the graceful scalability of our proposal via synthetic and real-life microarray examples.
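The screening rule described above reduces to thresholding the off-diagonal entries of the sample covariance at λ and extracting connected components. A minimal sketch on a hypothetical toy covariance (using SciPy's graph routines, not the authors' code):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

# Hedged sketch of the thresholding/screening rule: variables whose
# thresholded sample covariance graph falls into separate connected
# components can be solved as independent, smaller graphical lasso problems.

def covariance_components(S, lam):
    """Vertex partition induced by thresholding |S_ij| (i != j) at lam."""
    adj = (np.abs(S) > lam).astype(int)
    np.fill_diagonal(adj, 0)                      # ignore the diagonal
    n_comp, labels = connected_components(csr_matrix(adj), directed=False)
    return n_comp, labels

# Toy block-diagonal sample covariance: {0,1} and {2,3} are uncoupled
S = np.array([[1.0, 0.6, 0.0, 0.0],
              [0.6, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.7],
              [0.0, 0.0, 0.7, 1.0]])

n_comp, labels = covariance_components(S, lam=0.5)
print(n_comp)  # 2
print(labels[0] == labels[1] and labels[2] == labels[3])  # True
```

The paper's result is that this cheap partition coincides exactly with the component structure of the graphical lasso solution at the same λ, which is why it can wrap any existing solver.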
Meier, Timothy B; Wildenberg, Joseph C; Liu, Jingyu; Chen, Jiayu; Calhoun, Vince D; Biswal, Bharat B; Meyerand, Mary E; Birn, Rasmus M; Prabhakaran, Vivek
2012-01-01
Parallel Independent Component Analysis (para-ICA) is a multivariate method that can identify complex relationships between different data modalities by simultaneously performing Independent Component Analysis on each data set while finding mutual information between the two data sets. We use para-ICA to test the hypothesis that spatial sub-components of common resting state networks (RSNs) covary with specific behavioral measures. Resting state scans and a battery of behavioral indices were collected from 24 younger adults. Group ICA was performed and common RSNs were identified by spatial correlation to publically available templates. Nine RSNs were identified and para-ICA was run on each network with a matrix of behavioral measures serving as the second data type. Five networks had spatial sub-components that significantly correlated with behavioral components. These included a sub-component of the temporo-parietal attention network that differentially covaried with different trial-types of a sustained attention task, sub-components of default mode networks that covaried with attention and working memory tasks, and a sub-component of the bilateral frontal network that split the left inferior frontal gyrus into three clusters according to its cytoarchitecture that differentially covaried with working memory performance. Additionally, we demonstrate the validity of para-ICA in cases with unbalanced dimensions using simulated data.
Determination of covariant Schwinger terms in anomalous gauge theories
International Nuclear Information System (INIS)
Kelnhofer, G.
1991-01-01
A functional integral method is used to determine equal-time commutators between the covariant currents and the covariant Gauss-law operators in theories which are affected by an anomaly. Using a differential geometric setup we show how the derivation of consistent and covariant Schwinger terms can be understood on an equal footing. We find a modified consistency condition for the covariant anomaly. As a by-product, the Bardeen-Zumino functional, which relates consistent and covariant anomalies, can be interpreted as a connection on a certain line bundle over all gauge potentials. Finally the commutator anomalies are calculated for the two- and four-dimensional cases. (Author) 13 refs
ERRORJ. Covariance processing code system for JENDL. Version 2
International Nuclear Information System (INIS)
Chiba, Gou
2003-09-01
ERRORJ is the covariance processing code system for Japanese Evaluated Nuclear Data Library (JENDL) that can produce group-averaged covariance data to apply it to the uncertainty analysis of nuclear characteristics. ERRORJ can treat the covariance data for cross sections including resonance parameters as well as angular distributions and energy distributions of secondary neutrons which could not be dealt with by former covariance processing codes. In addition, ERRORJ can treat various forms of multi-group cross section and produce multi-group covariance file with various formats. This document describes an outline of ERRORJ and how to use it. (author)
Piecewise linear regression splines with hyperbolic covariates
International Nuclear Information System (INIS)
Cologne, John B.; Sposto, Richard
1992-09-01
Consider the problem of fitting a curve to data that exhibit a multiphase linear response with smooth transitions between phases. We propose substituting hyperbolas as covariates in piecewise linear regression splines to obtain curves that are smoothly joined. The method provides an intuitive and easy way to extend the two-phase linear hyperbolic response model of Griffiths and Miller, and of Watts and Bacon, to accommodate more than two linear segments. The resulting regression spline with hyperbolic covariates may be fit by nonlinear regression methods to estimate the degree of curvature between adjoining linear segments. The added complexity of fitting nonlinear, as opposed to linear, regression models is not great. The extra effort is particularly worthwhile when investigators are unwilling to assume that the slope of the response changes abruptly at the join points. We can also estimate the join points (the values of the abscissas where the linear segments would intersect if extrapolated) if their number and approximate locations may be presumed known. An example using data on changing age at menarche in a cohort of Japanese women illustrates the use of the method for exploratory data analysis. (author)
Hierarchical multivariate covariance analysis of metabolic connectivity.
Carbonell, Felix; Charil, Arnaud; Zijdenbos, Alex P; Evans, Alan C; Bedell, Barry J
2014-12-01
Conventional brain connectivity analysis is typically based on the assessment of interregional correlations. Given that correlation coefficients are derived from both covariance and variance, group differences in covariance may be obscured by differences in the variance terms. To facilitate a comprehensive assessment of connectivity, we propose a unified statistical framework that interrogates the individual terms of the correlation coefficient. We have evaluated the utility of this method for metabolic connectivity analysis using [18F]2-fluoro-2-deoxyglucose (FDG) positron emission tomography (PET) data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) study. As an illustrative example of the utility of this approach, we examined metabolic connectivity in angular gyrus and precuneus seed regions of mild cognitive impairment (MCI) subjects with low and high β-amyloid burdens. This new multivariate method allowed us to identify alterations in the metabolic connectome, which would not have been detected using classic seed-based correlation analysis. Ultimately, this novel approach should be extensible to brain network analysis and broadly applicable to other imaging modalities, such as functional magnetic resonance imaging (MRI).
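The motivating observation above follows directly from r = cov(x, y) / (sd(x) · sd(y)): two groups can differ markedly in covariance yet show identical correlations when their variances differ too. A toy illustration (invented numbers, not ADNI data):

```python
# Hedged illustration of why correlation can mask covariance differences:
# the correlation coefficient divides covariance by the two standard
# deviations, so variance differences between groups can cancel out a
# genuine covariance difference. Numbers below are invented.

def correlation(cov_xy, sd_x, sd_y):
    """Pearson correlation from covariance and standard deviations."""
    return cov_xy / (sd_x * sd_y)

r_a = correlation(2.0, 2.0, 2.0)   # group A: covariance 2, variances 4
r_b = correlation(8.0, 4.0, 4.0)   # group B: 4x the covariance, 4x the variances

print(r_a == r_b)  # True -- the fourfold covariance difference is invisible
```

Interrogating the covariance and variance terms separately, as the proposed framework does, recovers exactly the group difference that the correlation coefficient hides here.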
Spatiotemporal noise covariance estimation from limited empirical magnetoencephalographic data
International Nuclear Information System (INIS)
Jun, Sung C; Plis, Sergey M; Ranken, Doug M; Schmidt, David M
2006-01-01
The performance of parametric magnetoencephalography (MEG) and electroencephalography (EEG) source localization approaches can be degraded by the use of poor background noise covariance estimates. In general, estimation of the noise covariance for spatiotemporal analysis is difficult mainly due to the limited noise information available. Furthermore, its estimation requires a large amount of storage and a one-time but very large (and sometimes intractable) calculation of its inverse. To overcome these difficulties, noise covariance models consisting of one pair or a sum of multiple pairs of Kronecker products of spatial covariance and temporal covariance have been proposed. However, these approaches cannot be applied when the noise information is very limited, i.e., the amount of noise information is less than the degrees of freedom of the noise covariance models. A common example of this is when only averaged noise data are available for a limited prestimulus region (typically at most a few hundred milliseconds duration). For such cases, a diagonal spatiotemporal noise covariance model consisting of sensor variances with no spatial or temporal correlation has been the common choice for spatiotemporal analysis. In this work, we propose a different noise covariance model which consists of diagonal spatial noise covariance and Toeplitz temporal noise covariance. It can easily be estimated from limited noise information, and no time-consuming optimization and data-processing are required. Thus, it can be used as an alternative choice when one-pair or multi-pair noise covariance models cannot be estimated due to lack of noise information. To verify its capability we used Bayesian inference dipole analysis and a number of simulated and empirical datasets. We compared this covariance model with other existing covariance models such as conventional diagonal covariance, one-pair and multi-pair noise covariance models, when noise information is sufficient to estimate them.
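The structure of such a model, a diagonal spatial factor (per-sensor variances) paired with a symmetric Toeplitz temporal factor (stationary lag correlations), can be sketched as a Kronecker product. The variances and lag correlations below are invented, and this is only the structural idea, not the authors' estimation procedure:

```python
import numpy as np
from scipy.linalg import toeplitz

# Hedged sketch of a diagonal-spatial x Toeplitz-temporal noise covariance
# model: C = C_spatial (x) C_temporal, where (x) is the Kronecker product.
# Only n_sensors variances plus n_times lag values need to be estimated,
# instead of a full (n_sensors*n_times)^2 covariance.

n_sensors, n_times = 3, 4
sensor_var = np.array([1.0, 2.0, 0.5])        # diagonal spatial part (assumed)
lag_corr = np.array([1.0, 0.5, 0.2, 0.05])    # temporal autocorrelation (assumed)

C_spatial = np.diag(sensor_var)
C_temporal = toeplitz(lag_corr)               # symmetric Toeplitz from lag values
C = np.kron(C_spatial, C_temporal)            # full spatiotemporal model

print(C.shape)                 # (12, 12)
print(np.allclose(C, C.T))     # True
```

The parameter count here is n_sensors + n_times rather than (n_sensors·n_times)², which is why such models remain estimable from a short prestimulus baseline.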
General-Covariant Quantum Mechanics of Dirac Particle in Curved Space-Times
International Nuclear Information System (INIS)
Tagirov, Eh.A.
1994-01-01
A general covariant analog of the standard non-relativistic Quantum Mechanics with relativistic corrections in normal geodesic frames in the general Riemannian space-time is constructed for the Dirac particle. Not only the Pauli equation with hermitian Hamiltonian and the pre-Hilbert structure of space of its solutions but also the matrix elements of hermitian operators of momentum, (curvilinear) spatial coordinates and spin of the particle are deduced as general-covariant asymptotic approximations in c⁻², c being the velocity of light, to their naturally determined general-relativistic preimages. It is shown that the Hamiltonian in the Pauli equation originated by the Dirac equation is unitary equivalent to the operator of energy, originated by the metric energy-momentum tensor of the spinor field. Commutation and other properties of the observables connected with the considered change of geometrical background of Quantum Mechanics are briefly discussed. 7 refs
International Nuclear Information System (INIS)
Boehmer, Bertram
2000-01-01
Results of estimation of the covariance matrix of the neutron spectrum in the WWER-1000 reactor cavity and pressure vessel positions are presented. Two-dimensional calculations with the discrete ordinates transport code DORT in r-theta and r-z geometry were used to determine the neutron group spectrum covariances, including cross-correlations between the positions of interest. The new Russian ABBN-93 data set and the CONSYST code were used to supply all transport calculations with group neutron data. All possible sources of uncertainty, namely those caused by the neutron cross sections, fission sources, geometrical dimensions and material densities, were considered, whereas the uncertainty of the calculation method was considered negligible in view of the available precision of the Monte Carlo simulation used for the more precise evaluation of the neutron fluence. (Authors)
Assessment of the Gaussian Covariance Approximation over an Earth-Asteroid Encounter Period
Mattern, Daniel W.
2017-01-01
In assessing the risk an asteroid may pose to the Earth, the asteroid's state is often predicted for many years, often decades. Only by accounting for the asteroid's initial state uncertainty can a measure of the risk be calculated. With the asteroid's state uncertainty growing as a function of the initial velocity uncertainty, the orbit velocity at the last state update, and the time from the last update to the epoch of interest, the asteroid's position uncertainties can grow to many times the size of the Earth when propagated to the encounter risk corridor. This paper examines the merits of propagating the asteroid's state covariance as an analytical matrix. The results of this study help to bound the efficacy of applying different metrics for assessing the risk an asteroid poses to the Earth. Additionally, this work identifies a criterion for when different covariance propagation methods are needed to continue predictions after an Earth-encounter period.
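The growth of position uncertainty with velocity uncertainty and elapsed time follows from linear covariance propagation, P → Φ P Φᵀ. A one-dimensional toy sketch (not the paper's dynamics model; all values invented):

```python
import numpy as np

# Hedged toy sketch of linear (Gaussian) covariance propagation: under a
# linearized dynamics model with state-transition matrix Phi, the covariance
# maps as P -> Phi @ P @ Phi.T, so position variance grows quadratically
# with propagation time through the initial velocity variance.

def propagate(P, dt):
    # 1-D constant-velocity state [position, velocity]
    Phi = np.array([[1.0, dt],
                    [0.0, 1.0]])
    return Phi @ P @ Phi.T

P0 = np.diag([1.0, 0.04])        # assumed initial position/velocity variances
P_long = propagate(P0, dt=100.0)

# Position variance after propagation: 1 + 0.04 * 100^2 = 401
print(P_long[0, 0])  # 401.0
```

Even this toy model shows why decades-long predictions can yield position uncertainties many times the size of the Earth, and why the Gaussian (analytical) form of the propagated covariance eventually needs re-examination near an encounter.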
Spatial Pyramid Covariance based Compact Video Code for Robust Face Retrieval in TV-series.
Li, Yan; Wang, Ruiping; Cui, Zhen; Shan, Shiguang; Chen, Xilin
2016-10-10
We address the problem of face video retrieval in TV-series, which searches video clips based on the presence of a specific character, given one face track of that character. This is tremendously challenging because on one hand, faces in TV-series are captured in largely uncontrolled conditions with complex appearance variations, and on the other hand the retrieval task typically needs an efficient representation with low time and space complexity. To handle this problem, we propose a compact and discriminative representation for the huge body of video data, named Compact Video Code (CVC). Our method first models the face track by its sample (i.e., frame) covariance matrix to capture the video data variations in a statistical manner. To incorporate discriminative information and obtain a more compact video signature suitable for retrieval, the high-dimensional covariance representation is further encoded as a much lower-dimensional binary vector, which finally yields the proposed CVC. Specifically, each bit of the code, i.e., each dimension of the binary vector, is produced via supervised learning in a max margin framework, which aims to make a balance between the discriminability and stability of the code. Besides, we further extend the descriptive granularity of the covariance matrix from the traditional pixel level to the more general patch level, and proceed to propose a novel hierarchical video representation named Spatial Pyramid Covariance (SPC) along with a fast calculation method. Face retrieval experiments on two challenging TV-series video databases, i.e., the Big Bang Theory and Prison Break, demonstrate the competitiveness of the proposed CVC over state-of-the-art retrieval methods. In addition, as a general video matching algorithm, CVC is also evaluated in the traditional video face recognition task on a standard Internet database, i.e., YouTube Celebrities, showing its quite promising performance by using an extremely compact code with only 128 bits.
Random matrix theory for heavy-tailed time series
DEFF Research Database (Denmark)
Heiny, Johannes
2017-01-01
This paper is a review of recent results for large random matrices with heavy-tailed entries. First, we outline the development of and some classical results in random matrix theory. We focus on large sample covariance matrices, their limiting spectral distributions, the asymptotic behavior...
Computing more proper covariances of energy dependent nuclear data
International Nuclear Information System (INIS)
Vanhanen, R.
2016-01-01
Highlights: • We present conditions for covariances of energy dependent nuclear data to be proper. • We provide methods to detect non-positive and inconsistent covariances in ENDF-6 format. • We propose methods to find nearby more proper covariances. • The methods can be used as a part of a quality assurance program. - Abstract: We present conditions for covariances of energy dependent nuclear data to be proper in the sense that the covariances are positive, i.e., their eigenvalues are non-negative, and consistent with respect to the sum rules of nuclear data. For the ENDF-6 format covariances we present methods to detect non-positive and inconsistent covariances. These methods would be useful as a part of a quality assurance program. We also propose methods that can be used to find nearby more proper energy dependent covariances. These methods can be used to remove unphysical components, while preserving most of the physical components. We consider several different senses in which the nearness can be measured. These methods could be useful if a re-evaluation of improper covariances is not feasible. Two practical examples are processed and analyzed. These demonstrate some of the properties of the methods. We also demonstrate that the ENDF-6 format covariances of linearly dependent nuclear data should usually be encoded with the derivation rules.
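One common repair of a non-positive covariance, in the Frobenius sense of nearness, is eigenvalue clipping: symmetrize, then zero out the negative eigenvalues. This is only one of several nearness measures the paper considers; a minimal sketch with an invented improper matrix:

```python
import numpy as np

# Hedged sketch of one repair idea: project an improper (indefinite)
# covariance matrix to a nearby positive semidefinite one by clipping
# negative eigenvalues at zero. For a symmetric input this is the nearest
# PSD matrix in the Frobenius norm.

def nearest_psd(C):
    C = 0.5 * (C + C.T)                 # symmetrize first
    w, V = np.linalg.eigh(C)
    w_clipped = np.clip(w, 0.0, None)   # remove the unphysical components
    return V @ np.diag(w_clipped) @ V.T

# A hypothetical "evaluated" covariance with correlation > 1 (improper)
C_bad = np.array([[1.0, 1.5],
                  [1.5, 1.0]])          # eigenvalues 2.5 and -0.5

C_fixed = nearest_psd(C_bad)
print(np.all(np.linalg.eigvalsh(C_fixed) >= -1e-12))  # True
```

Clipping preserves the eigenvectors and the positive part of the spectrum, which matches the paper's goal of removing unphysical components while keeping most of the physical ones; enforcing the nuclear-data sum rules is an additional constraint not shown here.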
Development of covariance data for fast reactor cores. 3
International Nuclear Information System (INIS)
Shibata, Keiichi; Hasegawa, Akira
1999-03-01
Covariances have been estimated for nuclear data contained in JENDL-3.2. As for Cr and Ni, the physical quantities for which covariances are deduced are cross sections and the first order Legendre-polynomial coefficient for the angular distribution of elastically scattered neutrons. The covariances were estimated by using the same methodology that had been used in the JENDL-3.2 evaluation in order to keep a consistency between mean values and their covariances. In a case where evaluated data were based on experimental data, the covariances were estimated from the same experimental data. For cross sections that had been evaluated by nuclear model calculations, the same model was applied to generate the covariances. The covariances obtained were compiled into ENDF-6 format files. The covariances, which had been prepared in the previous fiscal year, were re-examined, and some improvements were performed. Parts of the Fe and ²³⁵U covariances were updated. Covariances of nu-p and nu-d for ²⁴¹Pu and of fission neutron spectra for ²³³,²³⁵,²³⁸U and ²³⁹,²⁴⁰Pu were newly added to the data files. (author)
Anomalous current from the covariant Wigner function
Prokhorov, George; Teryaev, Oleg
2018-04-01
We consider accelerated and rotating media of weakly interacting fermions in local thermodynamic equilibrium on the basis of a kinetic approach. Kinetic properties of such media can be described by the covariant Wigner function incorporating the relativistic distribution functions of particles with spin. We obtain formulae for the axial current by summing terms of all orders in the thermal vorticity tensor and chemical potential, both for massive and massless particles. In the massless limit all the terms of fourth and higher orders in vorticity and of third order in chemical potential and temperature equal zero. It is shown that the axial current acquires a topological component along the 4-acceleration vector. The similarity between different approaches to baryon polarization is established.
Covariant non-commutative space–time
Directory of Open Access Journals (Sweden)
Jonathan J. Heckman
2015-05-01
We introduce a covariant non-commutative deformation of 3+1-dimensional conformal field theory. The deformation introduces a short-distance scale ℓp, and thus breaks scale invariance, but preserves all space–time isometries. The non-commutative algebra is defined on space–times with non-zero constant curvature, i.e. dS4 or AdS4. The construction makes essential use of the representation of CFT tensor operators as polynomials in an auxiliary polarization tensor. The polarization tensor takes an active part in the non-commutative algebra, which for dS4 takes the form of so(5,1), while for AdS4 it assembles into so(4,2). The structure of the non-commutative correlation functions hints that the deformed theory contains gravitational interactions and a Regge-like trajectory of higher spin excitations.
Covariant entropy bound and loop quantum cosmology
International Nuclear Information System (INIS)
Ashtekar, Abhay; Wilson-Ewing, Edward
2008-01-01
We examine Bousso's covariant entropy bound conjecture in the context of radiation filled, spatially flat, Friedmann-Robertson-Walker models. The bound is violated near the big bang. However, the hope has been that quantum gravity effects would intervene and protect it. Loop quantum cosmology provides a near ideal setting for investigating this issue. For, on the one hand, quantum geometry effects resolve the singularity and, on the other hand, the wave function is sharply peaked at a quantum corrected but smooth geometry, which can supply the structure needed to test the bound. We find that the bound is respected. We suggest that the bound need not be an essential ingredient for a quantum gravity theory but may emerge from it under suitable circumstances.
Nonparametric Bayesian models for a spatial covariance.
Reich, Brian J; Fuentes, Montserrat
2012-01-01
A crucial step in the analysis of spatial data is to estimate the spatial correlation function that determines the relationship between a spatial process at two locations. The standard approach to selecting the appropriate correlation function is to use prior knowledge or exploratory analysis, such as a variogram analysis, to select the correct parametric correlation function. Rather than selecting a particular parametric correlation function, we treat the covariance function as an unknown function to be estimated from the data. We propose a flexible prior for the correlation function to provide robustness to the choice of correlation function. We specify the prior for the correlation function using spectral methods and the Dirichlet process prior, which is a common prior for an unknown distribution function. Our model does not require Gaussian data or spatial locations on a regular grid. The approach is demonstrated using a simulation study as well as an analysis of California air pollution data.
Covariant Derivatives and the Renormalization Group Equation
Dolan, Brian P.
The renormalization group equation for N-point correlation functions can be interpreted in a geometrical manner as an equation for Lie transport of amplitudes in the space of couplings. The vector field generating the diffeomorphism has components given by the β functions of the theory. It is argued that this simple picture requires modification whenever any one of the points at which the amplitude is evaluated becomes close to any other. This modification necessitates the introduction of a connection on the space of couplings and new terms appear in the renormalization group equation involving covariant derivatives of the β function and the curvature associated with the connection. It is shown how the connection is related to the operator product expansion coefficients, but there remains an arbitrariness in its definition.
Covariant formulation of scalar-torsion gravity
Hohmann, Manuel; Järv, Laur; Ualikhanova, Ulbossyn
2018-05-01
We consider a generalized teleparallel theory of gravitation, where the action contains an arbitrary function of the torsion scalar and a scalar field, f(T, ϕ), thus encompassing the cases of f(T) gravity and a nonminimally coupled scalar field as subclasses. The action is manifestly Lorentz invariant when besides the tetrad one allows for a flat but nontrivial spin connection. We derive the field equations and demonstrate how the antisymmetric part of the tetrad equations is automatically satisfied when the spin connection equation holds. The spin connection equation is a vital part of the covariant formulation, since it determines the spin connection associated with a given tetrad. We discuss how the spin connection equation can be solved in general and provide the cosmological and spherically symmetric examples. Finally, we generalize the theory to an arbitrary number of scalar fields.
Introduction to covariant formulation of superstring (field) theory
International Nuclear Information System (INIS)
Anon.
1987-01-01
The author discusses covariant formulation of superstring theories based on BRS invariance. A new formulation of superstring theory was constructed by Green and Schwarz, first in the light-cone gauge, and then a covariant action was discovered. The covariant action has an interesting geometrical interpretation; however, covariant quantizations are difficult to perform because of the existence of local supersymmetries. Introducing extra variables into the action, a modified action has been proposed. However, it would be difficult to prescribe constraints to define a physical subspace, or to reproduce the correct physical spectrum. Hence the old formulation, i.e., the Neveu-Schwarz-Ramond (NSR) model, is used for covariant quantization. The author begins by quantizing the NSR model in a covariant way using BRS charges. Then the author discusses the field theory of (free) superstrings.
COVARIANCE ESTIMATION USING CONJUGATE GRADIENT FOR 3D CLASSIFICATION IN CRYO-EM.
Andén, Joakim; Katsevich, Eugene; Singer, Amit
2015-04-01
Classifying structural variability in noisy projections of biological macromolecules is a central problem in Cryo-EM. In this work, we build on a previous method for estimating the covariance matrix of the three-dimensional structure present in the molecules being imaged. Our proposed method allows for incorporation of the contrast transfer function and a non-uniform distribution of viewing angles, making it more suitable for real-world data. We evaluate its performance on a synthetic dataset and an experimental dataset obtained by imaging a 70S ribosome complex.
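The abstract does not spell out the conjugate-gradient step; the standard CG iteration for a symmetric positive-definite system A x = b, of the kind such large covariance estimation problems are reduced to, can be sketched as follows (function name and plain-list representation are illustrative assumptions, not taken from the paper):

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Solve A x = b for symmetric positive-definite A.

    A is a list of rows; b and the returned x are plain lists.
    A sketch of the textbook CG iteration, not the paper's implementation.
    """
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual r = b - A x (x starts at 0)
    p = r[:]                      # initial search direction
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:          # residual small enough: converged
            break
        # new direction: residual plus a multiple of the old direction
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x
```

In exact arithmetic CG converges in at most n iterations, which is what makes it attractive when A is only available through matrix-vector products, as in the covariance estimation setting described above.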
Borovikov, Anna; Rienecker, Michele M.; Keppenne, Christian; Johnson, Gregory C.
2004-01-01
One of the most difficult aspects of ocean state estimation is the prescription of the model forecast error covariances. The paucity of ocean observations limits our ability to estimate the covariance structures from model-observation differences. In most practical applications, simple covariances are usually prescribed. Rarely are cross-covariances between different model variables used. Here a comparison is made between a univariate Optimal Interpolation (UOI) scheme and a multivariate OI algorithm (MvOI) in the assimilation of ocean temperature. In the UOI case only temperature is updated using a Gaussian covariance function and in the MvOI salinity, zonal and meridional velocities as well as temperature, are updated using an empirically estimated multivariate covariance matrix. Earlier studies have shown that a univariate OI has a detrimental effect on the salinity and velocity fields of the model. Apparently, in a sequential framework it is important to analyze temperature and salinity together. For the MvOI an estimation of the model error statistics is made by Monte-Carlo techniques from an ensemble of model integrations. An important advantage of using an ensemble of ocean states is that it provides a natural way to estimate cross-covariances between the fields of different physical variables constituting the model state vector, at the same time incorporating the model's dynamical and thermodynamical constraints as well as the effects of physical boundaries. Only temperature observations from the Tropical Atmosphere-Ocean array have been assimilated in this study. In order to investigate the efficacy of the multivariate scheme two data assimilation experiments are validated with a large independent set of recently published subsurface observations of salinity, zonal velocity and temperature. For reference, a third control run with no data assimilation is used to check how the data assimilation affects systematic model errors. While the performance of the
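The Monte-Carlo estimation of cross-covariances between different model fields from an ensemble of ocean states can be illustrated with a small sketch (pure Python; the function name and toy data are illustrative, not from the paper):

```python
def ensemble_cross_cov(X, Y):
    """Sample cross-covariance between two model fields.

    X, Y: lists of ensemble members; each member is a list of values of
    one field (e.g. temperature, salinity) at the model grid points.
    Returns the unbiased sample cross-covariance matrix (len(X[0]) x len(Y[0])).
    """
    n = len(X)
    px, py = len(X[0]), len(Y[0])
    mx = [sum(m[i] for m in X) / n for i in range(px)]   # ensemble mean of X
    my = [sum(m[j] for m in Y) / n for j in range(py)]   # ensemble mean of Y
    return [[sum((X[k][i] - mx[i]) * (Y[k][j] - my[j]) for k in range(n)) / (n - 1)
             for j in range(py)] for i in range(px)]
```

Because every member of the ensemble is a full model state, covariances computed this way automatically reflect the model's dynamical balances, which is the advantage the abstract points to over prescribed covariance functions.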
Fermionic covariant prolongation structure theory for supernonlinear evolution equation
International Nuclear Information System (INIS)
Cheng Jipeng; Wang Shikun; Wu Ke; Zhao Weizhong
2010-01-01
We investigate the superprincipal bundle and its associated superbundle. The super (nonlinear) connection on the superfiber bundle is constructed. Then, by means of the connection theory, we establish the fermionic covariant prolongation structure theory of the supernonlinear evolution equation. In this geometric theory, the fermionic covariant fundamental equations determining the prolongation structure are presented. As an example, the supernonlinear Schrödinger equation is analyzed in the framework of this fermionic covariant prolongation structure theory. We obtain its Lax pairs and Bäcklund transformation.
Some remarks on general covariance of quantum theory
International Nuclear Information System (INIS)
Schmutzer, E.
1977-01-01
If one accepts Einstein's general principle of relativity (covariance principle) also for the sphere of microphysics (quantum mechanics, quantum field theory, theory of elementary particles), one has to ask how far the fundamental laws of traditional quantum physics fulfil this principle. Attention is here drawn to a series of papers that have appeared during the last years, in which the author criticized the usual scheme of quantum theory (Heisenberg picture, Schroedinger picture etc.) and presented a new foundation of the basic laws of quantum physics, obeying the 'principle of fundamental covariance' (Einstein's covariance principle in space-time and covariance principle in Hilbert space of quantum operators and states). (author)
Summary report of technical meeting on neutron cross section covariances
International Nuclear Information System (INIS)
Trkov, A.; Smith, D.L.; Capote Noy, R.
2011-01-01
A summary is given of the Technical Meeting on Neutron Cross Section Covariances. The meeting goal was to assess covariance data needs and recommend appropriate methodologies to address those needs. Discussions on covariance data focused on three general topics: 1) Resonance and unresolved resonance regions; 2) Fast neutron region; and 3) Users' perspective: benchmarks' uncertainty and reactor dosimetry. A number of recommendations for further work were generated and the important work that remains to be done in the field of covariances was identified. (author)
Bias Correction in the Dynamic Panel Data Model with a Nonscalar Disturbance Covariance Matrix
Bun, M.J.G.
2003-01-01
Approximation formulae are developed for the bias of ordinary and generalized Least Squares Dummy Variable (LSDV) estimators in dynamic panel data models. Results from Kiviet [Kiviet, J. F. (1995), On bias, inconsistency, and efficiency of various estimators in dynamic panel data models, J.
Improving the ensemble optimization method through covariance matrix adaptation (CMA-EnOpt)
Fonseca, R.M.; Leeuwenburgh, O.; Hof, P.M.J. van den; Jansen, J.D.
2013-01-01
Ensemble Optimization (EnOpt) is a rapidly emerging method for reservoir model based production optimization. EnOpt uses an ensemble of controls to approximate the gradient of the objective function with respect to the controls. Current implementations of EnOpt use a Gaussian ensemble with a
Improving the ensemble-optimization method through covariance-matrix adaptation
Fonseca, R.M.; Leeuwenburgh, O.; Hof, P.M.J. van den; Jansen, J.D.
2015-01-01
Ensemble optimization (referred to throughout the remainder of the paper as EnOpt) is a rapidly emerging method for reservoir-model-based production optimization. EnOpt uses an ensemble of controls to approximate the gradient of the objective function with respect to the controls. Current
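The ensemble gradient approximation at the heart of EnOpt can be sketched in a stylized form: the sample cross-covariance between Gaussian control perturbations and the resulting objective values approximates a covariance-smoothed gradient. The function name, parameters, and toy objective below are illustrative assumptions, not the authors' implementation:

```python
import random

def enopt_gradient(J, u, sigma=0.1, n_ens=200, seed=0):
    """Stylized EnOpt gradient estimate for a control vector u.

    Samples Gaussian perturbations of the controls, evaluates the
    objective J on each ensemble member, and returns the sample
    cross-covariance between controls and objective values, which
    approximates C_uu * grad J (a smoothed gradient direction).
    """
    rng = random.Random(seed)
    d = len(u)
    members = [[ui + rng.gauss(0.0, sigma) for ui in u] for _ in range(n_ens)]
    vals = [J(m) for m in members]
    mu = [sum(m[i] for m in members) / n_ens for i in range(d)]   # control mean
    mj = sum(vals) / n_ens                                        # objective mean
    return [sum((members[k][i] - mu[i]) * (vals[k] - mj)
                for k in range(n_ens)) / (n_ens - 1)
            for i in range(d)]
```

The covariance-matrix adaptation the title refers to would then adjust the sampling covariance (here the fixed scalar sigma) between iterations; that adaptive step is not shown in this sketch.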