Nucleon quark distributions in a covariant quark-diquark model
Cloet, I.C. [Special Research Centre for the Subatomic Structure of Matter and Department of Physics and Mathematical Physics, University of Adelaide, SA 5005 (Australia) and Jefferson Lab, 12000 Jefferson Avenue, Newport News, VA 23606 (United States)]. E-mail: icloet@physics.adelaide.edu.au; Bentz, W. [Department of Physics, School of Science, Tokai University, Hiratsuka-shi, Kanagawa 259-1292 (Japan)]. E-mail: bentz@keyaki.cc.u-tokai.ac.jp; Thomas, A.W. [Jefferson Lab, 12000 Jefferson Avenue, Newport News, VA 23606 (United States)]. E-mail: awthomas@jlab.org
2005-08-18
Spin-dependent and spin-independent quark light-cone momentum distributions and structure functions are calculated for the nucleon. We utilize a modified Nambu-Jona-Lasinio model in which confinement is simulated by eliminating unphysical thresholds for nucleon decay into quarks. The nucleon bound state is obtained by solving the Faddeev equation in the quark-diquark approximation, where both scalar and axial-vector diquark channels are included. We find excellent agreement between our model results and empirical data.
Transversity quark distributions in a covariant quark-diquark model
Cloet, I.C. [Physics Division, Argonne National Laboratory, Argonne, IL 60439-4843 (United States)], E-mail: icloet@anl.gov; Bentz, W. [Department of Physics, School of Science, Tokai University, Hiratsuka-shi, Kanagawa 259-1292 (Japan)], E-mail: bentz@keyaki.cc.u-tokai.ac.jp; Thomas, A.W. [Jefferson Lab, 12000 Jefferson Avenue, Newport News, VA 23606 (United States); College of William and Mary, Williamsburg, VA 23187 (United States)], E-mail: awthomas@jlab.org
2008-01-17
Transversity quark light-cone momentum distributions are calculated for the nucleon. We utilize a modified Nambu-Jona-Lasinio model in which confinement is simulated by eliminating unphysical thresholds for nucleon decay into quarks. The nucleon bound state is obtained by solving the relativistic Faddeev equation in the quark-diquark approximation, where both scalar and axial-vector diquark channels are included. Particular attention is paid to comparing our results with the recent experimental extraction of the transversity distributions by Anselmino et al. We also compare our transversity results with earlier spin-independent and helicity quark distributions calculated in the same approach.
Baryons in and beyond the quark-diquark model
Eichmann, G.; Alkofer, R.; Krassnigg, A.; Fischer, C. S.; Nicmorus, D.
2011-01-01
We examine the nucleon's electromagnetic form factors in a Poincare-covariant Faddeev framework. The three-quark core contributions to the form factors are obtained by employing a quark-diquark approximation. We implement the self-consistent solution for the quark-photon vertex from its inhomogeneous Bethe-Salpeter equation. We find that the resulting transverse parts which add to the Ball-Chiu vertex have no significant impact on nucleon magnetic moments. The current-quark mass evolution of the form factors agrees with results from lattice QCD.
Quark-diquark model description for double charm baryons
Majethiya, A.; Patel, B.; Vinodkumar, P. C.
2010-01-01
We report here the mass spectrum and magnetic moments of ccq (q = u, d, s) systems in a potential-model framework, assuming an inter-quark potential of the colour Coulomb plus power form, with the power index ν varied from 0.1 to 2.0. The two charm quarks are treated as a diquark. The conventional one-gluon-exchange interaction is employed to obtain the hyperfine and fine-structure splittings between different states. We predict many low-lying states whose experimental verification could exclusively support the quark-diquark structure of baryons. (authors)
Gravitational form factors and angular momentum densities in light-front quark-diquark model
Kumar, Narinder [Indian Institute of Technology Kanpur, Department of Physics, Kanpur (India)]; Mondal, Chandan [Chinese Academy of Sciences, Institute of Modern Physics, Lanzhou (China)]; Sharma, Neetika [I K Gujral Punjab Technical University, Department of Physical Sciences, Jalandhar, Punjab (India); Panjab University, Department of Physics, Chandigarh (India)]
2017-12-15
We investigate the gravitational form factors (GFFs) and the longitudinal momentum (p⁺) densities for the proton in a light-front quark-diquark model. The light-front wave functions are constructed from the soft-wall AdS/QCD prediction. Contributions from both the scalar and the axial-vector diquarks are considered. The results are compared with the consequences of a parametrization of nucleon generalized parton distributions (GPDs) based on the recent MRST fits of parton distribution functions (PDFs), and with a soft-wall AdS/QCD model. The spatial distribution of angular momentum for up and down quarks inside the nucleon is presented. At the density level, we illustrate different definitions of angular momentum explicitly for up and down quarks in the light-front quark-diquark model inspired by AdS/QCD. (orig.)
Polarized heavy baryon production in quark-diquark model considering two different scenarios
Moosavi Nejad, S.M. [Yazd University, Faculty of Physics, Yazd (Iran, Islamic Republic of); Institute for Research in Fundamental Sciences (IPM), School of Particles and Accelerators, Tehran (Iran, Islamic Republic of)]; Delpasand, M. [Yazd University, Faculty of Physics, Yazd (Iran, Islamic Republic of)]
2017-09-15
At sufficiently large transverse momentum, the dominant production mechanism for heavy baryons is fragmentation. In this work, we first study the direct fragmentation of a heavy quark into unpolarized triply heavy baryons at leading order in perturbative QCD. In a completely different approach, we also analyze the two-stage fragmentation of a heavy quark into a scalar diquark, followed by the fragmentation of that diquark into a triply heavy baryon: the quark-diquark model of baryons. The results of this model are in acceptable agreement with those obtained in the full perturbative regime. Relying on the quark-diquark model and considering two different scenarios, we determine the spin-dependent fragmentation functions of polarized heavy baryons, in which a vector or a pseudoscalar heavy diquark is the intermediate particle between the initial heavy quark and the final-state baryon. (orig.)
Leading Twist TMDs in a Light-Front Quark-Diquark Model for Proton
Maji, Tanmay; Chakrabarti, Dipankar
2018-05-01
We present the p⊥ dependence (at fixed x) of the leading-twist T-even transverse-momentum-dependent parton distributions (TMDs) of the proton in a light-front quark-diquark model at μ² = 2.4 and 20 GeV². The quark densities for unpolarized and transversely polarized protons are also presented. We observe that the Soffer bound for TMDs is satisfied in this model.
Excited State Contributions to the Heavy Baryon Fragmentation Functions in a Quark-Diquark Model
Adamov, A. D.; Goldstein, Gary R.
2001-01-01
Spin-dependent fragmentation functions for heavy-flavor quarks fragmenting into heavy baryons are calculated in a quark-diquark model. The production of intermediate spin-1/2 and spin-3/2 excited states is explicitly included. The resulting Λ_b production rate and polarization at LEP energies are in agreement with experiment. The Λ_c and Ξ_c functions are also obtained. The spin-independent f₁(z) is compared to data. The integrated values for the production rates agree with the data.
Charmonium decays into proton-antiproton and a quark-diquark model for the nucleon
Anselmino, M.; Forte, S.
1990-01-01
A quark-diquark model of the nucleon is applied to a perturbative QCD description of several decays of the charmonium family: η_c, χ_{c0,c1,c2} → p̄p. Both experimental data and theoretical considerations are used to fix the parameters of the model. Decay rates for the χ's in good agreement with the existing experimental results can be obtained. The values for the decay of the η_c are instead found to be much smaller than the data. Our formalism provides a general framework for the computation of the decay amplitudes of any ^{2S+1}L_J, C = +1, heavy-quarkonium state into hadron-antihadron. The explicit expression for the decay into two photons is also given. (author)
Hadronization of quark-diquark model for nucleon structure and nuclear force by path integral
Nagata, Keitaro
2003-01-01
One of the central issues of hadron physics is how to interpret the properties and origin of the nuclear force. The nuclear force is in principle a manifestation of the dynamics of quarks and gluons, but no attempt to describe it directly from QCD, the fundamental theory of the strong interaction, has yet succeeded. Chiral symmetry and its spontaneous breaking are among the key phenomena for understanding hadron physics, and the Nambu-Jona-Lasinio (NJL) model is one of the quark models devised to explain phenomena related to chiral symmetry. Although the path-integral method for deducing a Lagrangian describing mesons from the NJL model is well known as bosonization, it has been difficult to extend it to baryons, because baryons are three-body systems. In this paper, a method is reported for deducing a Lagrangian which describes baryons and mesons from a quark-diquark Lagrangian, by assuming that baryons are bound states of a quark and a diquark. (S. Funahashi)
Quark-diquark approximation of the three-quark structure of baryons in the quark confinement model
Efimov, G.V.; Ivanov, M.A.; Lyubovitskij, V.E.
1990-01-01
The baryon octet (1/2⁺) and decuplet (3/2⁺) are investigated as relativistic three-quark states in the quark confinement model (QCM), a relativistic quark model based on certain assumptions about hadronization and quark confinement. The quark-diquark approximation of the three-quark structure of baryons is proposed. Within this approach we describe the main low-energy characteristics of baryons, such as magnetic moments, electromagnetic radii and form factors, the ratio of axial and vector constants in semileptonic baryon-octet decays, and strong form factors and decay widths. The results obtained are in agreement with experimental data. 31 refs.; 4 figs.; 5 tabs
Meson and baryon production in K⁺ and π⁺ beam jets and the quark-diquark cascade model
Kinoshita, Kisei [Kagoshima Univ. (Japan). Faculty of Education]; Noda, Hujio; Tashiro, Tsutomu
1982-11-01
A quark-diquark cascade model which includes flavor dependence and resonance effects is studied. The inclusive distributions of vector and pseudoscalar mesons and of octet baryons and antibaryons in K⁺ and π⁺ beam jets are analyzed. The contribution of decuplet baryons to the octet-baryon spectra is very important in meson beam jets. The effects of the asymmetric u- and s̄-quark distributions in the K⁺, and of SU(6)-symmetry breaking for the produced octet baryons, are discussed in connection with the π⁺/K⁺ beam ratio and other data.
Quark diquark symmetry breaking
Souza, M.M. de
1980-01-01
Assuming that baryons are made of quark-diquark pairs, the wave functions for the 126 allowed ground states are written down. The quark creation and annihilation operators are generalized to describe the quark-diquark structure in terms of a parameter σ. Assuming that all quark-quark interactions are mediated by gluons transforming like an octet of vector mesons, the effective Hamiltonian and the baryon masses are written as constraint equations for the elements of the mass matrix. The symmetry is SU(6)_quark × SU(21)_diquark, broken by quark-quark interactions invariant under U(6), U(2)_spin and U(3), and also by interactions transforming like the eighth and third components of SU(3). In the limit of no quark-diquark structure (σ = 0), the ground-state masses are fitted to within 1% of the experimental data, except for the Δ(1232), where the error is almost 2%. Expanding the decuplet mass equations in terms of σ and keeping terms only up to second order, this error is reduced. (Author) [pt]
Energy dependence of the multiplicity analysis of quark-diquark jets
Biswal, K.; Panda, A. R.; Parida, B. K.
1980-01-01
Under the assumption of hard scattering, a multiplicity analysis of quark-diquark jets is made in a model analogous to the quark-cascade jet-production model developed earlier. In the present approach the diquark is treated as a coherent object consisting of the two quarks which remain after the hard scattering. It is assumed to produce a baryon and an antiquark in the first stage of its fragmentation; the resulting quark-antiquark pair then hadronises as per the cascade model. This picture of quark-diquark fragmentation is adequately supported by observations made in recent ISR experiments at CERN. The technique is applied in a unified manner to weak, electromagnetic and strong processes involving quark-diquark hadronisation, with fair agreement with the experimental results. (0 refs).
Tsurugai, T.
1987-01-01
Feynman-x distributions and transverse-momentum (p_T) distributions for the inclusive reactions pp → h±, π⁰, K_s⁰, Λ⁰, Λ̄⁰, K*±, Σ*± + anything at 360 GeV/c are analyzed in terms of quark-diquark fragmentation models. Comparison of the model predictions with the inclusive data reveals that the model with diquarks can quantitatively describe all the data. In particular, for baryon production such as pp → Λ⁰ + anything, the model without diquarks shows serious discrepancies with the data. Using the quark-diquark fragmentation model, we find that a primordial transverse momentum ⟨p_T⟩ ≅ 0.6 GeV/c reproduces well the p_T² distributions and the Feynman x-p_T correlations. (author)
Quark-diquark approximation of the three-quark structure of a nucleon and the NN phase shifts
Efimov, G.V.; Ivanov, M.A.
1988-01-01
The quark-diquark approximations of the three-quark structure of the nucleon are considered in the framework of the quark confinement model (QCM), based on definite concepts of hadronization and quark confinement. The static nucleon characteristics (magnetic moments, the ratio G_A/G_V and the strong meson-nucleon coupling constants) are calculated. The behaviour of the electromagnetic and strong nucleon form factors is obtained at low energies (0 ≤ Q² = −q², where q is the momentum transfer). The one-boson-exchange potential is constructed and the NN phase shifts are computed. Our results are compared with experiment and with the Bonn potential model. 45 refs.; 7 figs.; 3 tabs
Effective quark-diquark supersymmetry: an algebraic approach
Catto, S.
1989-01-01
Effective hadronic supersymmetries and the color algebra are discussed: an extended Miyazawa U(6/21) supersymmetry between mesons and baryons is derived from QCD under certain assumptions and approximations, also using a dynamical suppression of color-symmetric states. This shows the hadronic origin of supersymmetry as well as the underlying relation of exceptional algebras to the quark model. Supergroups, and infinite-dimensional algebras such as the Virasoro algebra, then emerge as useful descriptions of certain properties of the hadronic spectrum. Applications to exotic mesons and baryons are discussed
Asai, Makoto
1986-01-01
Using the European Hybrid Spectrometer (EHS) system, we have investigated the properties of the four-prong 'high-mass' diffraction-dissociation process in the exclusive reactions pp → pX, where X represents pπ⁺π⁻nπ⁰ (n = 0, 1, 2). We present experimental evidence that the Pomeron couples to a single valence quark in the incident proton and that the other two valence quarks in the proton behave as a spectator diquark. We also show that most of the baryons in these processes are produced from the spectator diquark system. The p_T suppression is also shown in the Gottfried-Jackson frame, in which the excited system composed of the Pomeron and the incident proton is at rest. The characteristic features of hadronization in this process are very similar to those of quark-diquark fragmentation in lepton-hadron deep inelastic scattering. (author)
Magnetic moments of triply heavy baryons in quark-diquark model
Thakkar, Kaushal; Majethiya, Ajay; Vinodkumar, P.C.
2016-01-01
Along with the well-established triply flavoured light (uuu) and strange (sss) baryons, QCD predicts similar states made up of charm quarks (the triply charmed baryon, ccc) and bottom quarks (the triply bottom baryon, bbb). Such states have yet to be observed experimentally. After the observation of the doubly charmed baryon by the SELEX group, it is expected that a triply heavy-flavour baryonic state may be observed soon. Though a considerable amount of data on the properties of singly heavy baryons is available in the literature, only sparse attention has been paid to the spectroscopy of doubly and triply heavy-flavour baryons, perhaps mainly due to the lack of experimental incentives.
Williams, A.G.
1998-01-01
There is a need for covariant solutions of bound state equations in order to construct realistic QCD based models of mesons and baryons. Furthermore, we ideally need to know the structure of these bound states in all kinematical regimes, which makes a direct solution in Minkowski space (without any 3-dimensional reductions) desirable. The Bethe-Salpeter equation (BSE) for bound states in scalar theories is reformulated and solved for arbitrary scattering kernels in terms of a generalized spectral representation directly in Minkowski space. This differs from the conventional Euclidean approach, where the BSE can only be solved in ladder approximation after a Wick rotation. An application of covariant Bethe-Salpeter solutions to a quark-diquark model of the nucleon is also briefly discussed. (orig.)
Properties of Doubly Heavy Baryons in the Relativistic Quark Model
Ebert, D.; Faustov, R.N.; Galkin, V.O.; Martynenko, A.P.
2005-01-01
Mass spectra and semileptonic decay rates of baryons consisting of two heavy (b or c) and one light quark are calculated in the framework of the relativistic quark model. The doubly heavy baryons are treated in the quark-diquark approximation. The ground and excited states of both the diquark and quark-diquark bound systems are considered. The quark-diquark potential is constructed. The light quark is treated completely relativistically, while the expansion in the inverse heavy-quark mass is used. The weak transition amplitudes of heavy diquarks bb and bc going, respectively, to bc and cc are explicitly expressed through the overlap integrals of the diquark wave functions in the whole accessible kinematic range. The relativistic baryon wave functions of the quark-diquark bound system are used for the calculation of the decay matrix elements, the Isgur-Wise function, and decay rates in the heavy-quark limit
Multivariate covariance generalized linear models
Bonat, W. H.; Jørgensen, Bent
2016-01-01
We propose a general framework for non-normal multivariate data analysis called multivariate covariance generalized linear models, designed to handle multivariate response variables, along with a wide range of temporal and spatial correlation structures defined in terms of a covariance link function combined with a matrix linear predictor involving known matrices. Models are fitted by using an efficient Newton scoring algorithm based on quasi-likelihood and Pearson estimating functions, using only second-moment assumptions. This provides a unified approach to a wide variety of types of response variables and covariance structures, including multivariate extensions. The method is motivated by three data examples that are not easily handled by existing methods. The first example concerns multivariate count data, the second involves response variables of mixed types, combined with repeated ...
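As a rough numerical illustration of the covariance structure this abstract describes, the sketch below builds a covariance matrix from a "matrix linear predictor" (a linear combination of known matrices) passed through a covariance link function. This is a minimal sketch, not the authors' implementation (e.g. the R `mcglm` package); the names `omega`, `Z0`, `Z1`, the dispersion values and the choice of an identity link are illustrative assumptions.

```python
import numpy as np

n = 4
Z0 = np.eye(n)                       # known matrix: independent component
Z1 = np.diag(np.ones(n - 1), 1)      # known matrix: first-order neighbour structure
Z1 = Z1 + Z1.T

def omega(tau, link="identity"):
    """Build Omega = g^{-1}(tau_0*Z0 + tau_1*Z1); only the identity link is sketched."""
    eta = tau[0] * Z0 + tau[1] * Z1  # the matrix linear predictor
    if link == "identity":
        return eta
    raise ValueError("only the identity link is sketched here")

Sigma = omega([1.0, 0.3])
assert np.allclose(Sigma, Sigma.T)              # a covariance must be symmetric
assert np.all(np.linalg.eigvalsh(Sigma) > 0)    # and positive definite
```

With other links (e.g. a matrix logarithm) any dispersion values yield a valid covariance; under the identity link positive definiteness must be checked, as the assertions do.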
Nucleon Mass from a Covariant Three-Quark Faddeev Equation
Eichmann, G.; Alkofer, R.; Krassnigg, A.; Nicmorus, D.
2010-01-01
We report the first study of the nucleon where the full Poincare-covariant structure of the three-quark amplitude is implemented in the Faddeev equation. We employ an interaction kernel which is consistent with contemporary studies of meson properties and aspects of chiral symmetry and its dynamical breaking, thus yielding a comprehensive approach to hadron physics. The resulting current-mass evolution of the nucleon mass compares well with lattice data and deviates only by ∼5% from the quark-diquark result obtained in previous studies.
Modeling Covariance Breakdowns in Multivariate GARCH
Jin, Xin; Maheu, John M
2014-01-01
This paper proposes a flexible way of modeling dynamic heterogeneous covariance breakdowns in multivariate GARCH (MGARCH) models. During periods of normal market activity, volatility dynamics are governed by an MGARCH specification. A covariance breakdown is any significant temporary deviation of the conditional covariance matrix from its implied MGARCH dynamics. This is captured through a flexible stochastic component that allows for changes in the conditional variances, covariances and impl...
Covariant solution of the three-quark problem in quantum field theory: the nucleon
Nicmorus, D.
2010-04-01
We provide details on a recent solution of the nucleon's covariant Faddeev equation in an explicit three-quark approach. The full Poincaré-covariant structure of the three-quark amplitude is implemented through an orthogonal basis obtained from a partial-wave decomposition. We employ a rainbow-ladder gluon exchange kernel which allows for a comparison with meson Bethe-Salpeter and baryon quark-diquark studies. We describe the construction of the three-quark amplitude in full detail and compare it to a notation widespread in recent publications. Finally, we discuss first numerical results for the nucleon's amplitude.
Efimov, G.V.; Ivanov, M.A.; Rusetskij, A.G.
1989-01-01
The S-wave πN-scattering lengths and the (pπ⁻)-atom lifetime are calculated in the quark confinement model. The nucleon is treated as a quark-diquark system. The fulfillment of the Weinberg-Tomozawa relations is checked. Agreement is achieved with experiment and with the results obtained within other approaches. 32 refs.; 5 figs.; 2 tabs
Anselmino, M.; Soares, J.; Caruso, F.; Joffily, S.
1991-01-01
The η_c decay into proton-antiproton cannot be explained by a lowest-order perturbative QCD quark scheme. In an attempt to improve on a previous result in which diquarks were also considered as nucleon constituents, the contribution of the spin-flip transition between scalar and vector diquarks inside the nucleon is computed and is shown to be strictly zero. This result excludes the possibility of understanding why this decay is experimentally observed with a branching ratio much greater than those of other charmonium decays into the same final state, χ_{0,1,2} → pp̄, which are successfully described by pQCD in terms of quark and diquark components of the proton. A theoretical explanation of this decay rate is thus still lacking, and it is suggested that pseudoscalar glueballs might play an important role in solving the puzzle. The experimental results are also briefly discussed. (author)
Equivalent models in covariance structure analysis
Luijben, T. C. W.
1991-01-01
Defining equivalent models as those that reproduce the same set of covariance matrices, necessary and sufficient conditions are stated for the local equivalence of two expanded identified models M1 and M2 when fitting the more restricted model M0. Assuming several regularity conditions, the rank
Covariant, chirally symmetric, confining model of mesons
Gross, F.; Milana, J.
1991-01-01
We introduce a new model of mesons as quark-antiquark bound states. The model is covariant, confining, and chirally symmetric. Our equations give an analytic solution for a zero-mass pseudoscalar bound state in the case of exact chiral symmetry, and also reduce to the familiar, highly successful nonrelativistic linear potential models in the limit of heavy-quark mass and lightly bound systems. In this fashion we are constructing a unified description of all the mesons from the π through the Υ. Numerical solutions for other cases are also presented
A special covariance structure for random coefficient models with both between and within covariates
Riedel, K.S.
1990-07-01
We review random coefficient (RC) models in linear regression and propose a bias correction to the maximum likelihood (ML) estimator. Asymptotic expansions of the ML equations are given when the between-individual variance is much larger or smaller than the variance from within-individual fluctuations. The standard model assumes that all but one covariate vary within each individual (we denote the within covariates by the vector χ₁). We consider random coefficient models where some of the covariates do not vary in any single individual (we denote the between covariates by the vector χ₀). The regression coefficients β_k can only be estimated in the subspace X_k of X. Thus the number of individuals necessary to estimate β and the covariance matrix Δ of β increases significantly in the presence of more than one between covariate. When the number of individuals is sufficient to estimate β but not the entire matrix Δ, additional assumptions must be imposed on the structure of Δ. A simple reduced model, in which the between component of β is fixed and only the within component varies randomly, fails because it is not invariant under linear coordinate transformations and can significantly overestimate the variance of new observations. We propose a covariance structure for Δ without these difficulties by first projecting the within covariates onto the space perpendicular to the between covariates. (orig.)
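The projection step mentioned in the abstract's last sentence can be sketched numerically: remove from the within covariates their component along the between covariates, so the two sets are orthogonal before the covariance structure for Δ is specified. A hypothetical illustration (the variable names, dimensions and simulated data are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 50
x0 = np.column_stack([np.ones(n), rng.normal(size=n)])  # between covariates
x1 = rng.normal(size=(n, 2))                            # within covariates

# Orthogonal projector onto the complement of the column space of x0:
# P_perp = I - x0 (x0' x0)^{-1} x0'
P_perp = np.eye(n) - x0 @ np.linalg.solve(x0.T @ x0, x0.T)

# Projected within covariates, orthogonal to the between covariates
x1_perp = P_perp @ x1
assert np.allclose(x0.T @ x1_perp, 0.0, atol=1e-8)
```

The assertion verifies the defining property: after projection, every within covariate has zero inner product with every between covariate.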
Flexible Bayesian Dynamic Modeling of Covariance and Correlation Matrices
Lan, Shiwei; Holbrook, Andrew; Fortin, Norbert J.; Ombao, Hernando; Shahbaba, Babak
2017-01-01
Modeling covariance (and correlation) matrices is a challenging problem due to the large dimensionality and positive-definiteness constraint. In this paper, we propose a novel Bayesian framework based on decomposing the covariance matrix
Nonparametric Bayesian models for a spatial covariance.
Reich, Brian J; Fuentes, Montserrat
2012-01-01
A crucial step in the analysis of spatial data is to estimate the spatial correlation function that determines the relationship between a spatial process at two locations. The standard approach to selecting the appropriate correlation function is to use prior knowledge or exploratory analysis, such as a variogram analysis, to select the correct parametric correlation function. Rather than selecting a particular parametric correlation function, we treat the covariance function as an unknown function to be estimated from the data. We propose a flexible prior for the correlation function to provide robustness to the choice of correlation function. We specify the prior for the correlation function using spectral methods and the Dirichlet process prior, which is a common prior for an unknown distribution function. Our model does not require Gaussian data or spatial locations on a regular grid. The approach is demonstrated using a simulation study as well as an analysis of California air pollution data.
The phases of isospin-asymmetric matter in the two-flavor NJL model
Lawley, S. [Special Research Centre for the Subatomic Structure of Matter, University of Adelaide, Adelaide, SA 5005 (Australia) and Jefferson Lab, 12000 Jefferson Avenue, Newport News, VA 23606 (United States)]. E-mail: slawley@jlab.org; Bentz, W. [Department of Physics, School of Science, Tokai University, Hiratsuka-shi, Kanagawa 259-1292 (Japan); Thomas, A.W. [Jefferson Lab, 12000 Jefferson Avenue, Newport News, VA 23606 (United States)
2006-01-19
We investigate the phase diagram of isospin-asymmetric matter at T=0 in the two-flavor Nambu-Jona-Lasinio model. Our approach describes the single nucleon as a confined quark-diquark state, the saturation properties of nuclear matter at normal densities, and the phase transition to normal or color superconducting quark matter at higher densities. The resulting equation of state of charge-neutral matter and the structure of compact stars are discussed.
Validity of covariance models for the analysis of geographical variation
Guillot, Gilles; Schilling, Rene L.; Porcu, Emilio
2014-01-01
1. Due to the availability of large molecular data-sets, covariance models are increasingly used to describe the structure of genetic variation as an alternative to more heavily parametrised biological models. 2. We focus here on a class of parametric covariance models that received sustained att...
Theory of Covariance Equivalent ARMAV Models of Civil Engineering Structures
Andersen, P.; Brincker, Rune; Kirkegaard, Poul Henning
1996-01-01
In this paper the theoretical background for using covariance equivalent ARMAV models in modal analysis is discussed. It is shown how to obtain a covariance equivalent ARMA model for a univariate linear second order continuous-time system excited by Gaussian white noise. This result is generalized...
Bayesian hierarchical model for large-scale covariance matrix estimation.
Zhu, Dongxiao; Hero, Alfred O
2007-12-01
Many bioinformatics problems implicitly depend on estimating large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and OMICS data analysis.
Modelling the Covariance Structure in Marginal Multivariate Count Models
Bonat, W. H.; Olivero, J.; Grande-Vega, M.
2017-01-01
The main goal of this article is to present a flexible statistical modelling framework to deal with multivariate count data along with longitudinal and repeated measures structures. The covariance structure for each response variable is defined in terms of a covariance link function combined with a matrix linear predictor involving known matrices. In order to specify the joint covariance matrix for the multivariate response vector, the generalized Kronecker product is employed. We take into account the count nature of the data by means of the power dispersion function associated with the Poisson ... be used to indicate whether there was statistical evidence of a decline in blue duikers and other species hunted during the study period. Determining whether observed drops in the number of animals hunted are indeed true is crucial to assess whether species depletion effects are taking place in exploited ...
Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.
Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F
2013-04-01
In biomedical research, it is often of interest to characterize biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference making as few assumptions about restrictive parametric models as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to the following two aspects. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models by one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.
HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS.
Fan, Jianqing; Liao, Yuan; Mincheva, Martina
2011-01-01
The variance-covariance matrix plays a central role in the inferential theories of high-dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not applicable to many financial problems. Classical methods of estimating the covariance matrices are based on strict factor models, which assume independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming a sparse error covariance matrix, we allow the presence of cross-sectional correlation even after taking out common factors, which enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique of Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on covariance matrix estimation based on the factor structure is then studied.
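A rough sketch of the estimator family this abstract describes — principal-component factors plus a thresholded idiosyncratic covariance — might look as follows. A single constant hard threshold stands in for the adaptive, entry-wise thresholding of Cai and Liu; all names and values are illustrative:

```python
import numpy as np

def factor_thresholded_covariance(X, n_factors, threshold):
    """Sketch of a factor-based covariance estimator with thresholding.

    X: (n, p) data matrix. Common factors are taken from the leading
    principal components of the sample covariance; the residual
    (idiosyncratic) part is sparsified by hard-thresholding its
    off-diagonal entries, while variances are left untouched.
    """
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n                        # sample covariance
    vals, vecs = np.linalg.eigh(S)           # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:n_factors]
    L = vecs[:, idx] * np.sqrt(vals[idx])    # loadings of leading components
    low_rank = L @ L.T                       # common-factor part
    R = S - low_rank                         # idiosyncratic part
    R_thr = np.where(np.abs(R) >= threshold, R, 0.0)
    np.fill_diagonal(R_thr, np.diag(R))      # keep the variances
    return low_rank + R_thr
```

The returned matrix is the sum of a low-rank common component and a sparse residual, which is the structure the paper exploits to handle high dimensionality.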
Generalized Extreme Value model with Cyclic Covariate Structure ...
enhances the estimation of the return period; however, its application is ... Cohn T A and Lins H F 2005 Nature's style: Naturally trendy; Geophysical ... Final non-stationary GEV models with covariate structures shortlisted based on ...
Optimal covariance selection for estimation using graphical models
Vichik, Sergey; Oshman, Yaakov
2011-01-01
We consider a problem encountered when trying to estimate a Gaussian random field using a distributed estimation approach based on Gaussian graphical models. Because of constraints imposed by the estimation tools used in Gaussian graphical models, the a priori covariance of the random field is constrained to embed conditional independence constraints among a significant number of variables. The problem is, then: given the (unconstrained) a priori covariance of the random field, and the conditional ...
Matérn-based nonstationary cross-covariance models for global processes
Jun, Mikyoung
2014-01-01
... cross-covariance models, based on the Matérn covariance model class, that are suitable for describing prominent nonstationary characteristics of the global processes. In particular, we seek nonstationary versions of Matérn covariance models whose smoothness parameters ...
Modeling corporate defaults: Poisson autoregressions with exogenous covariates (PARX)
Agosto, Arianna; Cavaliere, Giuseppe; Kristensen, Dennis
We develop a class of Poisson autoregressive models with additional covariates (PARX) that can be used to model and forecast time series of counts. We establish the time series properties of the models, including conditions for stationarity and existence of moments. These results are in turn used...
Covariate selection for the semiparametric additive risk model
Martinussen, Torben; Scheike, Thomas
2009-01-01
This paper considers covariate selection for the additive hazards model. This model is particularly simple to study theoretically and its practical implementation has several major advantages over the similar methodology for the proportional hazards model. One complication compared ... and study their large sample properties for the situation where the number of covariates p is smaller than the number of observations. We also show that the adaptive Lasso has the oracle property. In many practical situations, it is more relevant to tackle the situation with large p compared with the number ... of observations. We do this by studying the properties of the so-called Dantzig selector in the setting of the additive risk model. Specifically, we establish a bound on how close the solution is to a true sparse signal in the case where the number of covariates is large. In a simulation study, we also compare ...
Bayes Factor Covariance Testing in Item Response Models.
Fox, Jean-Paul; Mulder, Joris; Sinharay, Sandip
2017-12-01
Two marginal one-parameter item response theory models are introduced, by integrating out the latent variable or random item parameter. It is shown that both marginal response models are multivariate (probit) models with a compound symmetry covariance structure. Several common hypotheses concerning the underlying covariance structure are evaluated using (fractional) Bayes factor tests. The support for a unidimensional factor (i.e., assumption of local independence) and differential item functioning are evaluated by testing the covariance components. The posterior distribution of common covariance components is obtained in closed form by transforming latent responses with an orthogonal (Helmert) matrix. This posterior distribution is defined as a shifted-inverse-gamma, thereby introducing a default prior and a balanced prior distribution. Based on that, an MCMC algorithm is described to estimate all model parameters and to compute (fractional) Bayes factor tests. Simulation studies are used to show that the (fractional) Bayes factor tests have good properties for testing the underlying covariance structure of binary response data. The method is illustrated with two real data studies.
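The key algebraic fact used above — an orthogonal Helmert transformation diagonalizes a compound-symmetry covariance matrix, separating one common component from the remaining unique ones — is easy to verify numerically. A sketch with arbitrary illustrative values of σ² and ρ:

```python
import numpy as np

def helmert(p):
    """Orthogonal Helmert matrix of order p (first row is the constant row)."""
    H = np.zeros((p, p))
    H[0] = 1.0 / np.sqrt(p)
    for i in range(1, p):
        H[i, :i] = 1.0 / np.sqrt(i * (i + 1))
        H[i, i] = -i / np.sqrt(i * (i + 1))
    return H

# Compound-symmetry covariance: sigma^2 * ((1 - rho) I + rho J).
p, sigma2, rho = 4, 2.0, 0.3
Sigma = sigma2 * ((1 - rho) * np.eye(p) + rho * np.ones((p, p)))

H = helmert(p)
D = H @ Sigma @ H.T   # diagonal: one 'common' and p-1 equal 'unique' variances
```

After the transform, `D[0, 0]` equals σ²(1 + (p − 1)ρ) and the remaining diagonal entries equal σ²(1 − ρ), which is what lets the posterior of the covariance components be obtained in closed form.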
Merons in a generally covariant model with Gursey term
Akdeniz, K.G.; Smailagic, A.
1982-10-01
We study meron solutions of the generally covariant and Weyl invariant fermionic model with Gursey term. We find that, due to the presence of this term, merons can exist even without the cosmological constant. This is a new feature compared to previously studied models. (author)
Modeling and Forecasting Large Realized Covariance Matrices and Portfolio Choice
Callot, Laurent A.F.; Kock, Anders B.; Medeiros, Marcelo C.
2017-01-01
We consider modeling and forecasting large realized covariance matrices by penalized vector autoregressive models. We consider Lasso-type estimators to reduce the dimensionality and provide strong theoretical guarantees on the forecast capability of our procedure. We show that we can forecast
Modeling the Conditional Covariance between Stock and Bond Returns
P. de Goeij (Peter); W.A. Marquering (Wessel)
2002-01-01
To analyze the intertemporal interaction between the stock and bond market returns, we allow the conditional covariance matrix to vary over time according to a multivariate GARCH model similar to Bollerslev, Engle and Wooldridge (1988). We extend the model such that it allows for ...
A reduced covariant string model for the extrinsic string
Botelho, L.C.L.
1989-01-01
A reduced covariant string model for the extrinsic string is studied using Polyakov's path-integral formalism. On the basis of this reduced model it is suggested that the extrinsic string has a critical dimension of 13. Additionally, Polyakov's renormalization-group law for the string rigidity coupling constant is calculated in a simple way. (A.C.A.S.)
Using Covariation Reasoning to Support Mathematical Modeling
Jacobson, Erik
2014-01-01
For many students, making connections between mathematical ideas and the real world is one of the most intriguing and rewarding aspects of the study of mathematics. In the Common Core State Standards for Mathematics (CCSSI 2010), mathematical modeling is highlighted as a mathematical practice standard for all grades. To engage in mathematical…
Garcia, Tanya P; Ma, Yanyuan
2017-10-01
We develop consistent and efficient estimation of parameters in general regression models with mismeasured covariates. We assume the model error and covariate distributions are unspecified, and the measurement error distribution is a general parametric distribution with unknown variance-covariance. We construct root-n consistent, asymptotically normal and locally efficient estimators using the semiparametric efficient score. We do not estimate any unknown distribution or model error heteroskedasticity. Instead, we form the estimator under possibly incorrect working distribution models for the model error, error-prone covariate, or both. Empirical results demonstrate robustness to different incorrect working models in homoscedastic and heteroscedastic models with error-prone covariates.
Identifying nonproportional covariates in the Cox model
Kraus, David
2008-01-01
Vol. 37, No. 4 (2008), pp. 617-625, ISSN 0361-0926. R&D Projects: GA AV ČR(CZ) IAA101120604; GA MŠk(CZ) 1M06047; GA ČR(CZ) GD201/05/H007. Keywords: Cox model; goodness of fit; proportional hazards assumption; time-varying coefficients. Impact factor: 0.324 (2008)
Globally covering a-priori regional gravity covariance models
D. Arabelos
2003-01-01
Gravity anomaly data generated using Wenzel's GPM98A model complete to degree 1800, from which OSU91A has been subtracted, have been used to estimate covariance functions for a set of globally covering equal-area blocks of size 22.5° × 22.5° at the Equator, having a 2.5° overlap. For each block an analytic covariance function model was determined. The models are based on four parameters: the depth to the Bjerhammar sphere (which determines the correlation), the free-air gravity anomaly variance, a scale factor of the OSU91A error degree-variances, and a maximal summation index N of the error degree-variances. The depth of the Bjerhammar sphere varies from -134 km to nearly zero, N varies from 360 to 40, the scale factor from 0.03 to 38.0, and the gravity variance from 1081 to 24 (10 µm s⁻²)². The parameters are interpreted in terms of the quality of the data used to construct OSU91A and GPM98A and general conditions such as the occurrence of mountain chains. The variation of the parameters shows that it is necessary to use regional covariance models in order to obtain a realistic signal-to-noise ratio in global applications. Key words: GOCE mission, covariance function, spacewise approach
Flexible Bayesian Dynamic Modeling of Covariance and Correlation Matrices
Lan, Shiwei
2017-11-08
Modeling covariance (and correlation) matrices is a challenging problem due to the large dimensionality and positive-definiteness constraint. In this paper, we propose a novel Bayesian framework based on decomposing the covariance matrix into variance and correlation matrices. The highlight is that the correlations are represented as products of vectors on unit spheres. We propose a variety of distributions on spheres (e.g. the squared-Dirichlet distribution) to induce flexible prior distributions for covariance matrices that go beyond the commonly used inverse-Wishart prior. To handle the intractability of the resulting posterior, we introduce the adaptive Δ-Spherical Hamiltonian Monte Carlo. We also extend our structured framework to dynamic cases and introduce unit-vector Gaussian process priors for modeling the evolution of correlation among multiple time series. Using an example of the Normal-Inverse-Wishart problem, a simulated periodic process, and an analysis of local field potential data (collected from the hippocampus of rats performing a complex sequence memory task), we demonstrate the validity and effectiveness of our proposed framework for (dynamic) modeling of covariance and correlation matrices.
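The basic decomposition this framework builds on — splitting a covariance matrix into standard deviations and a correlation matrix, Σ = D R D — can be sketched in a few lines (the paper's spherical product-of-unit-vectors representation of R itself is beyond this fragment):

```python
import numpy as np

def decompose_covariance(Sigma):
    """Sigma = D R D with D = diag(standard deviations), R a correlation matrix."""
    sd = np.sqrt(np.diag(Sigma))
    R = Sigma / np.outer(sd, sd)
    return sd, R

# Round trip on an arbitrary symmetric positive-definite matrix.
A = np.array([[2.0, 0.3, -0.1],
              [0.3, 1.0,  0.4],
              [-0.1, 0.4, 1.5]])
sd, R = decompose_covariance(A)
assert np.allclose(np.diag(R), 1.0)            # R has unit diagonal
assert np.allclose(np.outer(sd, sd) * R, A)    # exact reconstruction
```

Working with `sd` and `R` separately is what lets the authors place flexible priors on the correlation part without breaking positive definiteness.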
Markov modulated Poisson process models incorporating covariates for rainfall intensity.
Thayakaran, R; Ramesh, N I
2013-01-01
Time series of rainfall bucket tip times at the Beaufort Park station, Bracknell, in the UK are modelled by a class of Markov modulated Poisson processes (MMPP) which may be thought of as a generalization of the Poisson process. Our main focus in this paper is to investigate the effects of including covariate information into the MMPP model framework on statistical properties. In particular, we look at three types of time-varying covariates namely temperature, sea level pressure, and relative humidity that are thought to be affecting the rainfall arrival process. Maximum likelihood estimation is used to obtain the parameter estimates, and likelihood ratio tests are employed in model comparison. Simulated data from the fitted model are used to make statistical inferences about the accumulated rainfall in the discrete time interval. Variability of the daily Poisson arrival rates is studied.
Working covariance model selection for generalized estimating equations.
Carey, Vincent J; Wang, You-Gan
2011-11-20
We investigate methods for data-based selection of working covariance models in the analysis of correlated data with generalized estimating equations. We study two selection criteria: Gaussian pseudolikelihood and a geodesic distance based on discrepancy between model-sensitive and model-robust regression parameter covariance estimators. The Gaussian pseudolikelihood is found in simulation to be reasonably sensitive for several response distributions and noncanonical mean-variance relations for longitudinal data. Application is also made to a clinical dataset. Assessment of adequacy of both correlation and variance models for longitudinal data should be routine in applications, and we describe open-source software supporting this practice. Copyright © 2011 John Wiley & Sons, Ltd.
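As a sketch of the first criterion, the Gaussian pseudolikelihood of standardized cluster residuals can be compared across candidate working correlation structures. The structures and simulation settings below are illustrative; the paper's geodesic-distance criterion and its accompanying software are not reproduced here:

```python
import numpy as np

def gaussian_pseudolik(resid, R):
    """Gaussian log pseudolikelihood of standardized cluster residuals
    under working correlation matrix R (additive constants dropped)."""
    _, logdet = np.linalg.slogdet(R)
    Rinv = np.linalg.inv(R)
    return sum(-0.5 * (logdet + r @ Rinv @ r) for r in resid)

def exchangeable(p, rho):
    return (1.0 - rho) * np.eye(p) + rho * np.ones((p, p))

def ar1(p, rho):
    k = np.arange(p)
    return rho ** np.abs(k[:, None] - k[None, :])

# Simulate clusters with AR(1) correlation and rank three working models.
rng = np.random.default_rng(1)
R_true = ar1(4, 0.6)
resid = rng.multivariate_normal(np.zeros(4), R_true, size=500)
scores = {
    "independence": gaussian_pseudolik(resid, np.eye(4)),
    "exchangeable": gaussian_pseudolik(resid, exchangeable(4, 0.6)),
    "ar1": gaussian_pseudolik(resid, ar1(4, 0.6)),
}
best = max(scores, key=scores.get)   # expected to favour the true structure
```

With enough clusters the criterion reliably ranks the correctly specified working correlation highest, which is the sensitivity the simulations in the paper examine.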
Chiral phase transition in a covariant nonlocal NJL model
General, I.; Scoccola, N.N.
2001-01-01
The properties of the chiral phase transition at finite temperature and chemical potential are investigated within a nonlocal covariant extension of the NJL model based on a separable quark-quark interaction. We find that for low values of T the chiral transition is always of first order and that, for finite quark masses, at a certain end point the transition turns into a smooth crossover. Our prediction for the position of this point is similar to, although somewhat smaller than, previous estimates. (author)
Some remarks on estimating a covariance structure model from a sample correlation matrix
Maydeu Olivares, Alberto; Hernández Estrada, Adolfo
2000-01-01
A popular model in structural equation modeling involves a multivariate normal density with a structured covariance matrix that has been categorized according to a set of thresholds. In this setup one may estimate the covariance structure parameters from the sample tetrachoric/polychoric correlations, but only if the covariance structure is scale invariant. Doing so when the covariance structure is not scale invariant results in estimating a more restricted covariance structure than the one i...
Partially linear varying coefficient models stratified by a functional covariate
Maity, Arnab
2012-10-01
We consider the problem of estimation in semiparametric varying coefficient models where the covariate modifying the varying coefficients is functional and is modeled nonparametrically. We develop a kernel-based estimator of the nonparametric component and a profiling estimator of the parametric component of the model and derive their asymptotic properties. Specifically, we show the consistency of the nonparametric functional estimates and derive the asymptotic expansion of the estimates of the parametric component. We illustrate the performance of our methodology using a simulation study and a real data application.
Statistical mechanics of learning orthogonal signals for general covariance models
Hoyle, David C
2010-01-01
Statistical mechanics techniques have proved to be useful tools in quantifying the accuracy with which signal vectors are extracted from experimental data. However, analysis has previously been limited to specific model forms for the population covariance C, which may be inappropriate for real world data sets. In this paper we obtain new statistical mechanical results for a general population covariance matrix C. For data sets consisting of p sample points in R^N we use the replica method to study the accuracy of orthogonal signal vectors estimated from the sample data. In the asymptotic limit of N, p → ∞ at fixed α = p/N, we derive analytical results for the signal direction learning curves. In the asymptotic limit the learning curves follow a single universal form, each displaying a retarded learning transition. An explicit formula for the location of the retarded learning transition is obtained, and we find marked variation in its location dependent on the distribution of population covariance eigenvalues. The results of the replica analysis are confirmed against simulation.
Emergent gravity on covariant quantum spaces in the IKKT model
Steinacker, Harold C. [Faculty of Physics, University of Vienna, Boltzmanngasse 5, A-1090 Vienna (Austria)]
2016-12-30
We study perturbations of 4-dimensional fuzzy spheres as backgrounds in the IKKT or IIB matrix model. Gauge fields and metric fluctuations are identified among the excitation modes with lowest spin, supplemented by a tower of higher-spin fields. They arise from an internal structure which can be viewed as a twisted bundle over S^4, leading to a covariant noncommutative geometry. The linearized 4-dimensional Einstein equations are obtained from the classical matrix model action under certain conditions, modified by an IR cutoff. Some one-loop contributions to the effective action are computed using the formalism of string states.
Modeling higher twist contributions to deep inelastic scattering with diquarks
Anselmino, M.
1994-01-01
The most recent detailed data on the unpolarized nucleon structure functions allow a precise determination of higher twist contributions. Quark-quark correlations induced by color forces are expected to be a natural explanation for such effects; indeed, a quark-diquark picture of the nucleon, previously introduced in the description of several exclusive processes at intermediate Q^2 values, is found to model the proton higher twist data with great accuracy. The resulting parameters are consistent with the diquark properties suggested by other experimental and theoretical analyses. (author)
Modelling higher twist contributions to deep inelastic scattering with diquarks
Anselmino, M.; Caruso, F.; Penna Firme, A.; Soares, J.; Mello Neto, J.R.T. de
1994-08-01
The most recent detailed data on the unpolarized nucleon structure functions allow a precise determination of higher twist contributions. Quark-quark correlations induced by colour forces are expected to be a natural explanation for such effects: indeed, a quark-diquark picture of the nucleon, previously introduced in the description of several exclusive processes at intermediate Q^2 values, is found to model the proton higher twist data with great accuracy. The resulting parameters are consistent with the diquark properties suggested by other experimental and theoretical analyses. (author). 15 refs, 5 figs
The stability of nuclear matter in the Nambu-Jona-Lasinio model
Bentz, W. E-mail: bentz@keyaki.cc.u-tokai.ac.jp; Thomas, A.W. E-mail: athomas@physics.adelaide.edu.au
2001-12-17
Using the Nambu-Jona-Lasinio model to describe the nucleon as a quark-diquark state, we discuss the stability of nuclear matter in a hybrid model for the ground state at finite nucleon density. It is shown that a simple extension of the model to simulate the effects of confinement leads to a scalar polarizability of the nucleon. This, in turn, leads to a less attractive effective interaction between the nucleons, helping to achieve saturation of the nuclear matter ground state. It is also pointed out that the same effect naturally leads to a suppression of 'Z-graph' contributions with increasing scalar potential.
Structural Equation Models in a Redundancy Analysis Framework With Covariates.
Lovaglio, Pietro Giorgio; Vittadini, Giorgio
2014-01-01
A recent method to specify and fit structural equation modeling in the Redundancy Analysis framework based on so-called Extended Redundancy Analysis (ERA) has been proposed in the literature. In this approach, the relationships between the observed exogenous variables and the observed endogenous variables are moderated by the presence of unobservable composites, estimated as linear combinations of exogenous variables. However, in the presence of direct effects linking exogenous and endogenous variables, or concomitant indicators, the composite scores are estimated by ignoring the presence of the specified direct effects. To fit structural equation models, we propose a new specification and estimation method, called Generalized Redundancy Analysis (GRA), allowing us to specify and fit a variety of relationships among composites, endogenous variables, and external covariates. The proposed methodology extends the ERA method, using a more suitable specification and estimation algorithm, by allowing for covariates that affect endogenous indicators indirectly through the composites and/or directly. To illustrate the advantages of GRA over ERA we propose a simulation study of small samples. Moreover, we propose an application aimed at estimating the impact of formal human capital on the initial earnings of graduates of an Italian university, utilizing a structural model consistent with well-established economic theory.
Ultracentrifuge separative power modeling with multivariate regression using covariance matrix
Migliavacca, Elder
2004-01-01
In this work, the least-squares methodology with a covariance matrix is applied to fit a curve to the data and obtain a performance function for the separative power δU of an ultracentrifuge as a function of variables that are experimentally controlled. The experimental data refer to 460 experiments on the ultracentrifugation process for uranium isotope separation. The experimental uncertainties related to these independent variables are considered in the calculation of the experimental separative power values, determining an experimental data input covariance matrix. The process variables that significantly influence the δU values are chosen in order to give information on the ultracentrifuge behaviour when submitted to several levels of feed flow rate F, cut θ and product line pressure P_p. After validating the goodness of fit of the model, a residual analysis is carried out to verify the assumed basis concerning its randomness and independence and, mainly, the existence of residual heteroscedasticity with respect to any explanatory variable of the regression model. Surface curves are constructed relating the separative power to the control variables F, θ and P_p in order to compare the fitted model with the experimental data and, finally, to calculate their optimized values. (author)
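The least-squares-with-covariance-matrix machinery applied here to separative-power data is standard generalized least squares; a generic sketch with synthetic X, y and V (not the ultracentrifuge data):

```python
import numpy as np

def gls_fit(X, y, V):
    """Generalized least squares with data covariance V:
    beta = (X' V^-1 X)^-1 X' V^-1 y, with parameter covariance (X' V^-1 X)^-1."""
    Vinv = np.linalg.inv(V)
    cov_beta = np.linalg.inv(X.T @ Vinv @ X)
    beta = cov_beta @ (X.T @ Vinv @ y)
    return beta, cov_beta

# Noise-free check: exact data are recovered whatever SPD weighting is used.
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(20), rng.normal(size=20), rng.normal(size=20)])
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true
V = np.diag(rng.uniform(0.5, 2.0, size=20))   # heteroscedastic data covariance
beta, cov_beta = gls_fit(X, y, V)
assert np.allclose(beta, beta_true)
```

The returned `cov_beta` is the parameter covariance that the residual analysis and goodness-of-fit validation described above would then examine.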
Davies, Christopher E; Glonek, Gary Fv; Giles, Lynne C
2017-08-01
One purpose of a longitudinal study is to gain a better understanding of how an outcome of interest changes among a given population over time. In what follows, a trajectory will be taken to mean the series of measurements of the outcome variable for an individual. Group-based trajectory modelling methods seek to identify subgroups of trajectories within a population, such that trajectories that are grouped together are more similar to each other than to trajectories in distinct groups. Group-based trajectory models generally assume a certain structure in the covariances between measurements, for example conditional independence, homogeneous variance between groups or stationary variance over time. Violations of these assumptions could be expected to result in poor model performance. We used simulation to investigate the effect of covariance misspecification on the misclassification of trajectories in commonly used models under a range of scenarios. To do this we defined a measure of performance relative to the ideal Bayesian correct classification rate. We found that the more complex models generally performed better over a range of scenarios. In particular, incorrectly specified covariance matrices could significantly bias the results, whereas using models with a correct but more complicated than necessary covariance matrix incurred little cost.
The breaking of Bjorken scaling in the covariant parton model
Polkinghorne, J.C.
1976-01-01
Scale breaking is investigated in terms of a covariant parton model formulation of deep inelastic processes. It is shown that a consistent theory requires that the convergence properties of parton-hadron amplitudes be modified as well as the parton being given form factors. Purely logarithmic violation is possible and the resulting model has many features in common with asymptotically free gauge theories. Behaviour at large and small ω and fixed q^2 is investigated: νW_2 should increase with q^2 at large ω and decrease with q^2 at small ω. Heuristic arguments are also given which suggest that the model would lead only to logarithmic modifications of dimensional counting results in purely hadronic deep scattering. (Auth.)
ARMA Cholesky Factor Models for the Covariance Matrix of Linear Models.
Lee, Keunbaik; Baek, Changryong; Daniels, Michael J
2017-11-01
In longitudinal studies, serial dependence of repeated outcomes must be taken into account to make correct inferences on covariate effects. As such, care must be taken in modeling the covariance matrix. However, estimation of the covariance matrix is challenging because there are many parameters in the matrix and the estimated covariance matrix should be positive definite. To overcome these limitations, two Cholesky decomposition approaches have been proposed: modified Cholesky decomposition for autoregressive (AR) structure and moving average Cholesky decomposition for moving average (MA) structure, respectively. However, the correlations of repeated outcomes are often not captured parsimoniously using either approach separately. In this paper, we propose a class of flexible, nonstationary, heteroscedastic models that exploits the structure allowed by combining the AR and MA modeling of the covariance matrix, which we denote ARMACD. We analyze a recent lung cancer study to illustrate the power of our proposed methods.
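The modified Cholesky decomposition underlying the AR part can be sketched directly: T Σ T' = D, where row t of the unit lower-triangular T carries (minus) the generalized autoregressive parameters and D the innovation variances. This is a generic numerical illustration, not the ARMACD estimator itself:

```python
import numpy as np

def modified_cholesky(Sigma):
    """Compute T Sigma T' = D with T unit lower triangular.

    Row t of T holds minus the coefficients from regressing outcome t on
    outcomes 0..t-1 (the generalized autoregressive parameters); the
    diagonal of D holds the corresponding innovation variances."""
    p = Sigma.shape[0]
    T = np.eye(p)
    d = np.empty(p)
    d[0] = Sigma[0, 0]
    for t in range(1, p):
        phi = np.linalg.solve(Sigma[:t, :t], Sigma[:t, t])
        T[t, :t] = -phi
        d[t] = Sigma[t, t] - Sigma[:t, t] @ phi
    return T, np.diag(d)

# For an AR(1) correlation matrix only the lag-1 coefficient survives.
rho, p = 0.5, 5
k = np.arange(p)
Sigma = rho ** np.abs(k[:, None] - k[None, :])
T, D = modified_cholesky(Sigma)
assert np.allclose(T @ Sigma @ T.T, D, atol=1e-10)
assert np.allclose(np.diag(T[1:, :-1]), -rho)   # lag-1 AR coefficients
```

The unconstrained parameters in T and the positive diagonal of D are what make the decomposition attractive for guaranteeing a positive-definite covariance estimate.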
Monte Carlo simulations of hadronic fragmentation functions using the Nambu-Jona-Lasinio-jet model
Matevosyan, Hrayr H.; Thomas, Anthony W.; Bentz, Wolfgang
2011-01-01
The recently developed Nambu-Jona-Lasinio-jet model is used as an effective chiral quark theory to calculate the quark fragmentation functions to pions, kaons, nucleons, and antinucleons. The effects of the vector mesons ρ, K*, and φ on the production of secondary pions and kaons are included. The fragmentation processes to nucleons and antinucleons are described by using the quark-diquark picture, which has been shown to give a reasonable description of quark distribution functions. We incorporate effects of next-to-leading order in the Q^2 evolution, and compare our results with the empirical fragmentation functions.
The Influence of Normalization Weight in Population Pharmacokinetic Covariate Models.
Goulooze, Sebastiaan C; Völler, Swantje; Välitalo, Pyry A J; Calvier, Elisa A M; Aarons, Leon; Krekels, Elke H J; Knibbe, Catherijne A J
2018-03-23
In covariate (sub)models of population pharmacokinetic models, most covariates are normalized to the median value; however, for body weight, normalization to 70 kg or 1 kg is often applied. In this article, we illustrate the impact of normalization weight on the precision of population clearance (CL_pop) parameter estimates. The influence of normalization weight (70 kg, 1 kg or median weight) on the precision of the CL_pop estimate, expressed as relative standard error (RSE), was illustrated using data from a pharmacokinetic study in neonates with a median weight of 2.7 kg. In addition, a simulation study was performed to show the impact of normalization to 70 kg in pharmacokinetic studies with paediatric or obese patients. The RSE of the CL_pop parameter estimate in the neonatal dataset was lowest with normalization to median weight (8.1%), compared with normalization to 1 kg (10.5%) or 70 kg (48.8%). Typical clearance (CL) predictions were independent of the normalization weight used. Simulations showed that the increase in RSE of the CL_pop estimate with 70 kg normalization was highest in studies with a narrow weight range and a geometric mean weight far from 70 kg. When, instead of the median weight, a weight outside the observed range is used for normalization, the RSE of the CL_pop estimate will be inflated, and should therefore not be used for model selection. Instead, established mathematical principles can be used to calculate the RSE of the typical CL (CL_TV) at a relevant weight to evaluate the precision of CL predictions.
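The invariance noted above — typical clearance predictions do not depend on the normalization weight, only the precision of CL_pop does — follows from a simple reparameterization of the allometric covariate model. The 0.75 exponent and parameter values here are hypothetical illustrations, not estimates from the study:

```python
def typical_clearance(cl_pop, wt, wt_norm, exponent=0.75):
    """Allometric covariate model: CL_i = CL_pop * (WT_i / WT_norm)^exponent."""
    return cl_pop * (wt / wt_norm) ** exponent

# The same weight-clearance relationship written with two normalization
# weights: re-scaling CL_pop compensates exactly, so predictions agree.
cl_pop_70 = 10.0                                      # hypothetical CL at 70 kg
cl_pop_med = typical_clearance(cl_pop_70, 2.7, 70.0)  # same curve, 2.7 kg anchor
for wt in (1.5, 2.7, 4.0, 70.0):
    a = typical_clearance(cl_pop_70, wt, 70.0)
    b = typical_clearance(cl_pop_med, wt, 2.7)
    assert abs(a - b) < 1e-9 * max(a, 1.0)
```

Because the two parameterizations trace the same curve, normalization weight only changes where on the curve CL_pop sits — and hence how precisely it is estimated from data concentrated far from that anchor.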
Contributions to Estimation and Testing Block Covariance Structures in Multivariate Normal Models
Liang, Yuli
2015-01-01
This thesis concerns inference problems in balanced random effects models with a so-called block circular Toeplitz covariance structure. This class of covariance structures describes the dependency of some specific multivariate two-level data when both compound symmetry and circular symmetry appear simultaneously. We derive two covariance structures under two different invariance restrictions. The obtained covariance structures reflect both circularity and exchangeability present in the data....
Structure of Poincaré covariant tensor operators in quantum mechanical models
Polyzou, W.N.; Klink, W.H.
1988-01-01
The structure of operators that transform covariantly in Poincaré invariant quantum mechanical models is analyzed. These operators are shown to have an interaction dependence that comes from the geometry of the Poincaré group. The operators can be expressed in terms of matrix elements in a complete set of eigenstates of the mass and spin operators associated with the dynamical representation of the Poincaré group. The matrix elements are factored into geometrical coefficients (Clebsch-Gordan coefficients for the Poincaré group) and invariant matrix elements. The geometrical coefficients are fixed by the transformation properties of the operator and the eigenvalue spectrum of the mass and spin. The invariant matrix elements, which distinguish between different operators with the same transformation properties, are given in terms of a set of invariant form factors. Copyright 1988 Academic Press, Inc.
Kashiwa, Kouji; Matsuzaki, Masayuki; Kouno, Hiroaki; Yahiro, Masanobu
2007-01-01
We study the interplay of the chiral and the color superconducting phase transitions in an extended Nambu-Jona-Lasinio model with a multi-quark interaction that produces a nonlinear chiral-diquark coupling. We observe that this nonlinear coupling adds up coherently with the ω^2 interaction to either produce the chiral-color superconductivity coexistence phase or cancel it, depending on its sign. We discuss that a large coexistence region in the phase diagram is consistent with the quark-diquark picture for the nucleon, whereas its smallness is the prerequisite for the applicability of the Ginzburg-Landau approach.
Parametric Covariance Model for Horizon-Based Optical Navigation
Hikes, Jacob; Liounis, Andrew J.; Christian, John A.
2016-01-01
This Note presents an entirely parametric version of the covariance for horizon-based optical navigation measurements. The covariance can be written as a function of only the spacecraft position, two sensor design parameters, the illumination direction, the size of the observed planet, the size of the lit arc to be used, and the total number of observed horizon points. As a result, one may now more clearly understand the sensitivity of horizon-based optical navigation performance as a function of these key design parameters, which is insight that was obscured in previous (and nonparametric) versions of the covariance. Finally, the new parametric covariance is shown to agree with both the nonparametric analytic covariance and results from a Monte Carlo analysis.
Effect of correlation on covariate selection in linear and nonlinear mixed effect models.
Bonate, Peter L
2017-01-01
The effect of correlation among covariates on covariate selection was examined with linear and nonlinear mixed effects models. Demographic covariates were extracted from the National Health and Nutrition Examination Survey III database. Concentration-time profiles were Monte Carlo simulated where only one covariate affected apparent oral clearance (CL/F). A series of univariate covariate population pharmacokinetic models was fit to the data and compared with the reduced model without the covariate. The "best" covariate was identified using either the likelihood ratio test (LRT) statistic or the AIC. Weight and body surface area (calculated using the Gehan and George equation, 1970) were highly correlated (r = 0.98). Body surface area was often selected as a better covariate than weight, sometimes as often as 1 in 5 times, when weight was the covariate used in the data-generating mechanism. In a second simulation, parent drug concentration and three metabolites were simulated from a thorough QT study and used as covariates in a series of univariate linear mixed effects models of ddQTc interval prolongation. The covariate with the largest significant LRT statistic was deemed the "best" predictor. When the metabolite was formation-rate limited and only parent concentrations affected ddQTc intervals, the metabolite was chosen as a better predictor as often as 1 in 5 times, depending on the slope of the relationship between parent concentrations and ddQTc intervals. A correlated covariate can be chosen as a better predictor than another covariate in a linear or nonlinear population analysis by sheer correlation. These results explain why different covariates may be identified for the same drug in different analyses. Copyright © 2016 John Wiley & Sons, Ltd.
Real-time probabilistic covariance tracking with efficient model update.
Wu, Yi; Cheng, Jian; Wang, Jinqiao; Lu, Hanqing; Wang, Jun; Ling, Haibin; Blasch, Erik; Bai, Li
2012-05-01
The recently proposed covariance region descriptor has been proven robust and versatile for a modest computational cost. The covariance matrix enables efficient fusion of different types of features, where the spatial and statistical properties, as well as their correlation, are characterized. The similarity between two covariance descriptors is measured on Riemannian manifolds. Based on the same metric but with a probabilistic framework, we propose a novel tracking approach on Riemannian manifolds with a novel incremental covariance tensor learning (ICTL). To address the appearance variations, ICTL incrementally learns a low-dimensional covariance tensor representation and efficiently adapts online to appearance changes of the target with only O(1) computational complexity, resulting in a real-time performance. The covariance-based representation and the ICTL are then combined with the particle filter framework to allow better handling of background clutter, as well as the temporary occlusions. We test the proposed probabilistic ICTL tracker on numerous benchmark sequences involving different types of challenges including occlusions and variations in illumination, scale, and pose. The proposed approach demonstrates excellent real-time performance, both qualitatively and quantitatively, in comparison with several previously proposed trackers.
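The similarity measure on Riemannian manifolds mentioned above is commonly the affine-invariant metric on symmetric positive-definite (SPD) covariance descriptors. A minimal sketch of that metric (the function name and the generalized-eigenvalue route are illustrative choices, not taken from the paper):

```python
import numpy as np

def airm_distance(A, B):
    """Affine-invariant Riemannian distance between SPD matrices A and B:
    d(A, B) = ||log(A^(-1/2) B A^(-1/2))||_F = sqrt(sum_i log^2(lambda_i)),
    where lambda_i are the generalized eigenvalues of (B, A)."""
    lam = np.linalg.eigvals(np.linalg.solve(A, B))  # eigenvalues of A^{-1} B
    return float(np.sqrt(np.sum(np.log(lam.real) ** 2)))

# Example: distance between the identity and a scaled identity
d = airm_distance(np.eye(2), 4.0 * np.eye(2))  # equals sqrt(2) * log(4)
```

Because the metric is invariant under congruence transformations, the distance between two covariance descriptors does not depend on the (linear) feature coordinates chosen.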
A cautionary note on generalized linear models for covariance of unbalanced longitudinal data
Huang, Jianhua Z.
2012-03-01
Missing data in longitudinal studies can create enormous challenges in data analysis when coupled with the positive-definiteness constraint on a covariance matrix. For complete balanced data, the Cholesky decomposition of a covariance matrix makes it possible to remove the positive-definiteness constraint and use a generalized linear model setup to jointly model the mean and covariance using covariates (Pourahmadi, 2000). However, this approach may not be directly applicable when the longitudinal data are unbalanced, as coherent regression models for the dependence across all times and subjects may not exist. Within the existing generalized linear model framework, we show how to overcome this and other challenges by embedding the covariance matrix of the observed data for each subject in a larger covariance matrix and employing the familiar EM algorithm to compute the maximum likelihood estimates of the parameters and their standard errors. We illustrate and assess the methodology using real data sets and simulations. © 2011 Elsevier B.V.
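The Cholesky device referred to above (Pourahmadi, 2000) factors Σ as L D Lᵀ with L unit lower-triangular and D diagonal, so the below-diagonal entries of L and log D are unconstrained and can be regressed on covariates. A small sketch, assuming nothing beyond standard linear algebra (function name is illustrative):

```python
import numpy as np

def modified_cholesky(Sigma):
    """Factor Sigma = L @ diag(D) @ L.T with L unit lower-triangular.
    The below-diagonal entries of L and log(D) are unconstrained, which
    removes the positive-definiteness constraint for GLM-style modeling."""
    C = np.linalg.cholesky(Sigma)  # Sigma = C C', C lower-triangular
    d = np.diag(C)
    L = C / d                      # scale column j by 1/d[j] -> unit diagonal
    D = d ** 2                     # innovation variances
    return L, D

Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
L, D = modified_cholesky(Sigma)
```

Conversely, any unit lower-triangular L and positive D reassemble into a valid covariance matrix, which is exactly what makes the unconstrained regression setup possible.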
Sang, Huiyan; Jun, Mikyoung; Huang, Jianhua Z.
2011-01-01
This paper investigates the cross-correlations across multiple climate model errors. We build a Bayesian hierarchical model that accounts for the spatial dependence of individual models as well as cross-covariances across different climate models
Quantum mechanics vs. general covariance in gravity and string models
Martinec, E.J.
1984-01-01
Quantization of simple low-dimensional systems embodying general covariance is studied. Functional methods are employed in the calculation of effective actions for fermionic strings and 1 + 1 dimensional gravity. The author finds that regularization breaks apparent symmetries of the theory, providing new dynamics for the string and non-trivial dynamics for 1 + 1 gravity. The author moves on to consider the quantization of some generally covariant systems with a finite number of physical degrees of freedom, assuming the existence of an invariant cutoff. The author finds that the wavefunction of the universe in these cases is given by the solution to simple quantum mechanics problems
Ros, B.P.; Bijma, F.; de Munck, J.C.; de Gunst, M.C.M.
2016-01-01
This paper deals with multivariate Gaussian models for which the covariance matrix is a Kronecker product of two matrices. We consider maximum likelihood estimation of the model parameters, in particular of the covariance matrix. There is no explicit expression for the maximum likelihood estimator
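Since no explicit expression for the MLE exists, Kronecker-structured covariances are typically estimated iteratively. A sketch of the standard "flip-flop" iteration for zero-mean matrix observations (the function name, iteration count, and simulated example are illustrative assumptions; the estimators analysed in the paper may differ in detail):

```python
import numpy as np

def flip_flop(X, iters=30):
    """Alternating MLE updates for Cov(vec(X_i)) = B (x) A (Kronecker product),
    with A the (p, p) row covariance and B the (q, q) column covariance.
    X has shape (N, p, q); A and B are identified only up to A -> cA, B -> B/c."""
    N, p, q = X.shape
    A, B = np.eye(p), np.eye(q)
    for _ in range(iters):
        Binv = np.linalg.inv(B)
        A = sum(x @ Binv @ x.T for x in X) / (N * q)
        Ainv = np.linalg.inv(A)
        B = sum(x.T @ Ainv @ x for x in X) / (N * p)
    return A, B

# Example: recover a known Kronecker structure from simulated matrix data
rng = np.random.default_rng(0)
A0 = np.array([[2.0, 0.6], [0.6, 1.0]])
B0 = np.array([[1.0, -0.3], [-0.3, 0.5]])
Z = rng.standard_normal((4000, 2, 2))
X = np.einsum('ij,njk,lk->nil', np.linalg.cholesky(A0), Z, np.linalg.cholesky(B0))
A, B = flip_flop(X)
```

The scale indeterminacy cancels in the product, so it is the Kronecker product A ⊗ B, not A and B separately, that is compared to the truth.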
Forecasting Co-Volatilities via Factor Models with Asymmetry and Long Memory in Realized Covariance
M. Asai (Manabu); M.J. McAleer (Michael)
2014-01-01
Modelling covariance structures is known to suffer from the curse of dimensionality. In order to avoid this problem for forecasting, the authors propose a new factor multivariate stochastic volatility (fMSV) model for realized covariance measures that accommodates
Fitting direct covariance structures by the MSTRUCT modeling language of the CALIS procedure.
Yung, Yiu-Fai; Browne, Michael W; Zhang, Wei
2015-02-01
This paper demonstrates the usefulness and flexibility of the general structural equation modelling (SEM) approach to fitting direct covariance patterns or structures (as opposed to fitting implied covariance structures from functional relationships among variables). In particular, the MSTRUCT modelling language (or syntax) of the CALIS procedure (SAS/STAT version 9.22 or later: SAS Institute, 2010) is used to illustrate the SEM approach. The MSTRUCT modelling language supports a direct covariance pattern specification of each covariance element. It also supports the input of additional independent and dependent parameters. Model tests, fit statistics, estimates, and their standard errors are then produced under the general SEM framework. By using numerical and computational examples, the following tests of basic covariance patterns are illustrated: sphericity, compound symmetry, and multiple-group covariance patterns. Specification and testing of two complex correlation structures, the circumplex pattern and the composite direct product models with or without composite errors and scales, are also illustrated by the MSTRUCT syntax. It is concluded that the SEM approach offers a general and flexible modelling of direct covariance and correlation patterns. In conjunction with the use of SAS macros, the MSTRUCT syntax provides an easy-to-use interface for specifying and fitting complex covariance and correlation structures, even when the number of variables or parameters becomes large. © 2014 The British Psychological Society.
Dolan, C.V.; Molenaar, P.C.M.; Boomsma, D.I.
1991-01-01
D. Soerbom's (1974, 1976) simplex model approach to simultaneous analysis of means and covariance structure was applied to analysis of means observed in a single group. The present approach to the simultaneous biometric analysis of covariance and mean structure is based on the testable assumption
Robustness studies in covariance structure modeling - An overview and a meta-analysis
Hoogland, Jeffrey J.; Boomsma, A
In covariance structure modeling, several estimation methods are available. The robustness of an estimator against specific violations of assumptions can be determined empirically by means of a Monte Carlo study. Many such studies in covariance structure analysis have been published, but the
A Systematic Approach for Identifying Level-1 Error Covariance Structures in Latent Growth Modeling
Ding, Cherng G.; Jane, Ten-Der; Wu, Chiu-Hui; Lin, Hang-Rung; Shen, Chih-Kang
2017-01-01
It has been pointed out in the literature that misspecification of the level-1 error covariance structure in latent growth modeling (LGM) has detrimental impacts on the inferences about growth parameters. Since correct covariance structure is difficult to specify by theory, the identification needs to rely on a specification search, which,…
Matérn-based nonstationary cross-covariance models for global processes
Jun, Mikyoung
2014-07-01
Many spatial processes in environmental applications, such as climate variables and climate model errors on a global scale, exhibit complex nonstationary dependence structure, in not only their marginal covariance but also their cross-covariance. Flexible cross-covariance models for processes on a global scale are critical for an accurate description of each spatial process as well as the cross-dependences between them and also for improved predictions. We propose various ways to produce cross-covariance models, based on the Matérn covariance model class, that are suitable for describing prominent nonstationary characteristics of the global processes. In particular, we seek nonstationary versions of Matérn covariance models whose smoothness parameters vary over space, coupled with a differential operators approach for modeling large-scale nonstationarity. We compare their performance to the performance of some existing models in terms of the AIC and spatial predictions in two applications: joint modeling of surface temperature and precipitation, and joint modeling of errors in climate model ensembles. © 2014 Elsevier Inc.
Nuamah, N.N.N.N.
1991-01-01
This paper postulates the assumptions underlying the Mean Approach model and recasts the re-expressions of the normal equations of this model in partitioned matrices of covariances. These covariance structures have been analysed. (author). 16 refs
Levy, Roy; Xu, Yuning; Yel, Nedim; Svetina, Dubravka
2015-01-01
The standardized generalized dimensionality discrepancy measure and the standardized model-based covariance are introduced as tools to critique dimensionality assumptions in multidimensional item response models. These tools are grounded in a covariance theory perspective and associated connections between dimensionality and local independence.…
Yang, Yukay
I consider multivariate (vector) time series models in which the error covariance matrix may be time-varying. I derive a test of constancy of the error covariance matrix against the alternative that the covariance matrix changes over time. I design a new family of Lagrange-multiplier tests against...... to consider multivariate volatility modelling....
Moeyaert, Mariola; Ugille, Maaike; Ferron, John M.; Beretvas, S. Natasha; Van den Noortgate, Wim
2016-01-01
The impact of misspecifying covariance matrices at the second and third levels of the three-level model is evaluated. Results indicate that ignoring existing covariance has no effect on the treatment effect estimate. In addition, the between-case variance estimates are unbiased when covariance is either modeled or ignored. If the research interest…
Vansteelandt, S.; Martinussen, Torben; Tchetgen, E. J Tchetgen
2014-01-01
We consider additive hazard models (Aalen, 1989) for the effect of a randomized treatment on a survival outcome, adjusting for auxiliary baseline covariates. We demonstrate that the Aalen least-squares estimator of the treatment effect parameter is asymptotically unbiased, even when the hazard's dependence on time or on the auxiliary covariates is misspecified, and even away from the null hypothesis of no treatment effect. We furthermore show that adjustment for auxiliary baseline covariates does not change the asymptotic variance of the estimator of the effect of a randomized treatment. We conclude that, in view of its robustness against model misspecification, Aalen least-squares estimation is attractive for evaluating treatment effects on a survival outcome in randomized experiments, and the primary reasons to consider baseline covariate adjustment in such settings could be interest in subgroup...
Sang, Huiyan
2011-12-01
This paper investigates the cross-correlations across multiple climate model errors. We build a Bayesian hierarchical model that accounts for the spatial dependence of individual models as well as cross-covariances across different climate models. Our method allows for a nonseparable and nonstationary cross-covariance structure. We also present a covariance approximation approach to facilitate the computation in the modeling and analysis of very large multivariate spatial data sets. The covariance approximation consists of two parts: a reduced-rank part to capture the large-scale spatial dependence, and a sparse covariance matrix to correct the small-scale dependence error induced by the reduced rank approximation. We pay special attention to the case that the second part of the approximation has a block-diagonal structure. Simulation results of model fitting and prediction show substantial improvement of the proposed approximation over the predictive process approximation and the independent blocks analysis. We then apply our computational approach to the joint statistical modeling of multiple climate model errors. © 2012 Institute of Mathematical Statistics.
Accounting for covariate measurement error in a Cox model analysis of recurrence of depression.
Liu, K; Mazumdar, S; Stone, R A; Dew, M A; Houck, P R; Reynolds, C F
2001-01-01
When a covariate measured with error is used as a predictor in a survival analysis using the Cox model, the parameter estimate is usually biased. In clinical research, covariates measured without error such as treatment procedure or sex are often used in conjunction with a covariate measured with error. In a randomized clinical trial of two types of treatments, we account for the measurement error in the covariate, log-transformed total rapid eye movement (REM) activity counts, in a Cox model analysis of the time to recurrence of major depression in an elderly population. Regression calibration and two variants of a likelihood-based approach are used to account for measurement error. The likelihood-based approach is extended to account for the correlation between replicate measures of the covariate. Using the replicate data decreases the standard error of the parameter estimate for log(total REM) counts while maintaining the bias reduction of the estimate. We conclude that covariate measurement error and the correlation between replicates can affect results in a Cox model analysis and should be accounted for. In the depression data, these methods render comparable results that have less bias than the results when measurement error is ignored.
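Regression calibration, one of the approaches named above, replaces the error-prone covariate by its expected value given the mean of the replicates before fitting the outcome model. A minimal sketch using a linear outcome as a stand-in for the Cox model (all variable names and simulated numbers are illustrative, not from the study):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 5000, 2                                   # subjects, replicates per subject
x = rng.normal(0.0, 1.0, n)                      # true covariate (unobserved)
w = x[:, None] + rng.normal(0.0, 0.8, (n, k))    # replicate measurements with error
y = 2.0 * x + rng.normal(0.0, 1.0, n)            # outcome; true slope is 2

wbar = w.mean(axis=1)
naive = np.polyfit(wbar, y, 1)[0]                # attenuated (biased toward 0)

# regression calibration: replace wbar by E[x | wbar]
s2_u = np.mean(np.var(w, axis=1, ddof=1))        # error variance from replicates
s2_w = np.var(wbar, ddof=1)
lam = (s2_w - s2_u / k) / s2_w                   # estimated reliability ratio
xhat = wbar.mean() + lam * (wbar - wbar.mean())
calibrated = np.polyfit(xhat, y, 1)[0]           # bias-corrected slope
```

As the abstract notes, using the replicates serves double duty here: they identify the error variance and shrink it by 1/k in the averaged covariate.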
He, Peng; Eriksson, Frank; Scheike, Thomas H.
2016-01-01
With competing risks data, one often needs to assess the treatment and covariate effects on the cumulative incidence function. Fine and Gray proposed a proportional hazards regression model for the subdistribution of a competing risk under the assumption that the censoring distribution and the covariates are independent. Covariate-dependent censoring sometimes occurs in medical studies. In this paper, we study the proportional hazards regression model for the subdistribution of a competing risk with proper adjustments for covariate-dependent censoring. We consider a covariate-adjusted weight function obtained by fitting the Cox model for the censoring distribution and using the predictive probability for each individual. Our simulation study shows that the covariate-adjusted weight estimator is basically unbiased when the censoring time depends on the covariates...
Promotion time cure rate model with nonparametric form of covariate effects.
Chen, Tianlei; Du, Pang
2018-05-10
Survival data with a cured portion are commonly seen in clinical trials. Motivated from a biological interpretation of cancer metastasis, promotion time cure model is a popular alternative to the mixture cure rate model for analyzing such data. The existing promotion cure models all assume a restrictive parametric form of covariate effects, which can be incorrectly specified especially at the exploratory stage. In this paper, we propose a nonparametric approach to modeling the covariate effects under the framework of promotion time cure model. The covariate effect function is estimated by smoothing splines via the optimization of a penalized profile likelihood. Point-wise interval estimates are also derived from the Bayesian interpretation of the penalized profile likelihood. Asymptotic convergence rates are established for the proposed estimates. Simulations show excellent performance of the proposed nonparametric method, which is then applied to a melanoma study. Copyright © 2018 John Wiley & Sons, Ltd.
Gillet, N.; Jault, D.; Finlay, Chris
2013-01-01
Inferring the core dynamics responsible for the observed geomagnetic secular variation requires knowledge of the magnetic field at the core-mantle boundary together with its associated model covariances. However, most currently available field models have been built using regularization conditions...... variation error model in core flow inversions and geomagnetic data assimilation studies....
Gillet, Nicolas; Jault, D.; Finlay, Chris
2013-01-01
Inferring the core dynamics responsible for the observed geomagnetic secular variation requires knowledge of the magnetic field at the core mantle boundary together with its associated model covariances. However, all currently available field models have been built using regularization conditions...... variation error model in core flow inversions and geomagnetic data assimilation studies....
Dreano, Denis; Tandeo, P.; Pulido, M.; Ait-El-Fquih, Boujemaa; Chonavel, T.; Hoteit, Ibrahim
2017-01-01
Specification and tuning of errors from dynamical models are important issues in data assimilation. In this work, we propose an iterative expectation-maximisation (EM) algorithm to estimate the model error covariances using classical extended
Simulations and cosmological inference: A statistical model for power spectra means and covariances
Schneider, Michael D.; Knox, Lloyd; Habib, Salman; Heitmann, Katrin; Higdon, David; Nakhleh, Charles
2008-01-01
We describe an approximate statistical model for the sample variance distribution of the nonlinear matter power spectrum that can be calibrated from limited numbers of simulations. Our model retains the common assumption of a multivariate normal distribution for the power spectrum band powers but takes full account of the (parameter-dependent) power spectrum covariance. The model is calibrated using an extension of the framework in Habib et al. (2007) to train Gaussian processes for the power spectrum mean and covariance given a set of simulation runs over a hypercube in parameter space. We demonstrate the performance of this machinery by estimating the parameters of a power-law model for the power spectrum. Within this framework, our calibrated sample variance distribution is robust to errors in the estimated covariance and shows rapid convergence of the posterior parameter constraints with the number of training simulations.
Li, Baoyue; Bruyneel, Luk; Lesaffre, Emmanuel
2014-05-20
A traditional Gaussian hierarchical model assumes a nested multilevel structure for the mean and a constant variance at each level. We propose a Bayesian multivariate multilevel factor model that assumes a multilevel structure for both the mean and the covariance matrix. That is, in addition to a multilevel structure for the mean we also assume that the covariance matrix depends on covariates and random effects. This allows to explore whether the covariance structure depends on the values of the higher levels and as such models heterogeneity in the variances and correlation structure of the multivariate outcome across the higher level values. The approach is applied to the three-dimensional vector of burnout measurements collected on nurses in a large European study to answer the research question whether the covariance matrix of the outcomes depends on recorded system-level features in the organization of nursing care, but also on not-recorded factors that vary with countries, hospitals, and nursing units. Simulations illustrate the performance of our modeling approach. Copyright © 2013 John Wiley & Sons, Ltd.
Univariate and Multivariate Specification Search Indices in Covariance Structure Modeling.
Hutchinson, Susan R.
1993-01-01
Simulated population data were used to compare relative performances of the modification index and C. Chou and P. M. Bentler's Lagrange multiplier test (a multivariate generalization of a modification index) for four levels of model misspecification. Both indices failed to recover the true model except at the lowest level of misspecification. (SLD)
Hattori, Masasi; Oaksford, Mike
2007-01-01
In this article, 41 models of covariation detection from 2 x 2 contingency tables were evaluated against past data in the literature and against data from new experiments. A new model was also included based on a limiting case of the normative phi-coefficient under an extreme rarity assumption, which has been shown to be an important factor in…
P2: A random effects model with covariates for directed graphs
van Duijn, M.A.J.; Snijders, T.A.B.; Zijlstra, B.J.H.
A random effects model is proposed for the analysis of binary dyadic data that represent a social network or directed graph, using nodal and/or dyadic attributes as covariates. The network structure is reflected by modeling the dependence between the relations to and from the same actor or node.
Modeling and Forecasting (Un)Reliable Realized Covariances for More Reliable Financial Decisions
Bollerslev, Tim; Patton, Andrew J.; Quaedvlieg, Rogier
We propose a new framework for modeling and forecasting common financial risks based on (un)reliable realized covariance measures constructed from high-frequency intraday data. Our new approach explicitly incorporates the effect of measurement errors and time-varying attenuation biases into the c...
Pablo Gregori
2014-03-01
This paper surveys recent advances in the modeling of space and space-time Gaussian Random Fields (GRFs), tools of geostatistics useful for understanding special cases of noise in image analysis. They can be used when stationarity or isotropy are unrealistic assumptions, or even when negative covariance between some pairs of locations is evident. We show some strategies for escaping these restrictions, on the basis of rich classes of well-known stationary or isotropic nonnegative covariance models, and through suitable operations such as linear combinations, generalized means, or particular Fourier transforms.
On the Likely Utility of Hybrid Weights Optimized for Variances in Hybrid Error Covariance Models
Satterfield, E.; Hodyss, D.; Kuhl, D.; Bishop, C. H.
2017-12-01
Because of imperfections in ensemble data assimilation schemes, one cannot assume that the ensemble covariance is equal to the true error covariance of a forecast. Previous work demonstrated how information about the distribution of true error variances given an ensemble sample variance can be revealed from an archive of (observation-minus-forecast, ensemble-variance) data pairs. Here, we derive a simple and intuitively compelling formula to obtain the mean of this distribution of true error variances given an ensemble sample variance from (observation-minus-forecast, ensemble-variance) data pairs produced by a single run of a data assimilation system. This formula takes the form of a Hybrid weighted average of the climatological forecast error variance and the ensemble sample variance. Here, we test the extent to which these readily obtainable weights can be used to rapidly optimize the covariance weights used in Hybrid data assimilation systems that employ weighted averages of static covariance models and flow-dependent ensemble based covariance models. Univariate data assimilation and multi-variate cycling ensemble data assimilation are considered. In both cases, it is found that our computationally efficient formula gives Hybrid weights that closely approximate the optimal weights found through the simple but computationally expensive process of testing every plausible combination of weights.
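The idea of a hybrid weighted average of a static (climatological) variance and the ensemble sample variance can be illustrated with a least-squares choice of the weight. This is a toy sketch under assumed distributions, not the specific formula derived in the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20000, 8                              # forecast cases, ensemble size
v = rng.gamma(4.0, 0.5, n)                   # "true" forecast-error variances
s = v * rng.chisquare(m - 1, n) / (m - 1)    # noisy ensemble sample variances
c = v.mean()                                 # static climatological variance

# least-squares hybrid weight a for the estimate  v_hat = a*c + (1 - a)*s
a = np.sum((c - s) * (v - s)) / np.sum((c - s) ** 2)
hybrid = a * c + (1.0 - a) * s

def mse(est):
    return np.mean((est - v) ** 2)
```

Because `a` minimizes the squared error over all weights, the hybrid estimate cannot do worse than either the pure static variance (a = 1) or the raw ensemble variance (a = 0); the interesting empirical question, as in the abstract, is how close such cheap weights come to the fully optimized ones.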
A Matérn model of the spatial covariance structure of point rain rates
Sun, Ying
2014-07-15
It is challenging to model a precipitation field due to its intermittent and highly scale-dependent nature. Many models of point rain rates or areal rainfall observations have been proposed and studied for different time scales. Among them, the spectral model based on a stochastic dynamical equation for the instantaneous point rain rate field is attractive, since it naturally leads to a consistent space–time model. In this paper, we note that the spatial covariance structure of the spectral model is equivalent to the well-known Matérn covariance model. Using high-quality rain gauge data, we estimate the parameters of the Matérn model for different time scales and demonstrate that the Matérn model is superior to an exponential model, particularly at short time scales.
A Matérn model of the spatial covariance structure of point rain rates
Sun, Ying; Bowman, Kenneth P.; Genton, Marc G.; Tokay, Ali
2014-01-01
It is challenging to model a precipitation field due to its intermittent and highly scale-dependent nature. Many models of point rain rates or areal rainfall observations have been proposed and studied for different time scales. Among them, the spectral model based on a stochastic dynamical equation for the instantaneous point rain rate field is attractive, since it naturally leads to a consistent space–time model. In this paper, we note that the spatial covariance structure of the spectral model is equivalent to the well-known Matérn covariance model. Using high-quality rain gauge data, we estimate the parameters of the Matérn model for different time scales and demonstrate that the Matérn model is superior to an exponential model, particularly at short time scales.
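The Matérn covariance has simple closed forms at half-integer smoothness, and ν = 1/2 recovers exactly the exponential model the paper compares against. A sketch using one common parameterisation (scaling by √(2ν)h/ρ; other conventions exist, so parameter names here are an assumption):

```python
import math

def matern(h, sigma2=1.0, rho=1.0, nu=0.5):
    """Matérn covariance at lag h >= 0, closed forms for nu in {0.5, 1.5, 2.5}:
    C(h) = sigma2 * f(u) * exp(-u)  with  u = sqrt(2*nu) * h / rho."""
    if h == 0.0:
        return sigma2
    u = math.sqrt(2.0 * nu) * h / rho
    if nu == 0.5:
        return sigma2 * math.exp(-u)                          # exponential model
    if nu == 1.5:
        return sigma2 * (1.0 + u) * math.exp(-u)
    if nu == 2.5:
        return sigma2 * (1.0 + u + u * u / 3.0) * math.exp(-u)
    raise ValueError("closed form implemented only for nu in {0.5, 1.5, 2.5}")
```

Larger ν gives smoother sample paths, which is the extra flexibility the Matérn fit exploits over the exponential model at short time scales.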
Covariant introduction of quark spin into the dual resonance model
Iroshnikov, G.S.
1979-01-01
A simple method for introducing quark spin into the dual resonance model of hadron interactions is proposed. The method is suitable for amplitudes with an arbitrary number of particles. The amplitude for the interaction of real particles is presented as a product of the contribution of oscillatory excitations in the (q anti-q) system and a spin factor. The latter is equal to the trace of the product of the external-particle wave functions, constructed from constituent quarks and satisfying the relativistic Bargmann-Wigner equations. Two examples of calculating meson interaction amplitudes are presented.
On the possibility on constructing covariant chromomagnetic field models
Cabo, A.; Penaranda, S.; Martinez, R.
1995-03-01
Expressions for SO(4)-invariant Euclidean QCD generating functionals are introduced which should produce non-vanishing gluon condensates. Their investigation is started here by considering the loop expansion of the corresponding effective action, searching for a description differing from the usual perturbation theory. At this level, we consider special free propagators showing a sort of off-diagonal long-range order. The calculation of the polarization tensor leads to a gluon mass term which is proportional to the square root of the (also finite) value of the gluon condensate. The summation of all one-loop contributions to the energy having only mass insertions indicates the spontaneous generation of the condensate from the perturbative ground state, in a way resembling the similar effect in chromomagnetic field models. This initial inspection suggests the need for a closer investigation, which will be considered elsewhere. (author). 22 refs
TRANSPOSABLE REGULARIZED COVARIANCE MODELS WITH AN APPLICATION TO MISSING DATA IMPUTATION.
Allen, Genevera I; Tibshirani, Robert
2010-06-01
Missing data estimation is an important challenge with high-dimensional data arranged in the form of a matrix. Typically this data matrix is transposable, meaning that either the rows, columns, or both can be treated as features. To model transposable data, we present a modification of the matrix-variate normal, the mean-restricted matrix-variate normal, in which the rows and columns each have a separate mean vector and covariance matrix. By placing additive penalties on the inverse covariance matrices of the rows and columns, these so-called transposable regularized covariance models allow for maximum likelihood estimation of the mean and non-singular covariance matrices. Using these models, we formulate EM-type algorithms for missing data imputation in both the multivariate and transposable frameworks. We present theoretical results exploiting the structure of our transposable models that allow these models and imputation methods to be applied to high-dimensional data. Simulations and results on microarray data and the Netflix data show that these imputation techniques often outperform existing methods and offer a greater degree of flexibility.
A Flexible Spatio-Temporal Model for Air Pollution with Spatial and Spatio-Temporal Covariates
Lindström, Johan; Szpiro, Adam A; Sampson, Paul D; Oron, Assaf P; Richards, Mark; Larson, Tim V; Sheppard, Lianne
2013-01-01
The development of models that provide accurate spatio-temporal predictions of ambient air pollution at small spatial scales is of great importance for the assessment of potential health effects of air pollution. Here we present a spatio-temporal framework that predicts ambient air pollution by combining data from several different monitoring networks and deterministic air pollution model(s) with geographic information system (GIS) covariates. The model presented in this paper has been implem...
Integrating lysimeter drainage and eddy covariance flux measurements in a groundwater recharge model
Vasquez, Vicente; Thomsen, Anton Gårde; Iversen, Bo Vangsø
2015-01-01
Field scale water balance is difficult to characterize because controls exerted by soils and vegetation are mostly inferred from local scale measurements with relatively small support volumes. Eddy covariance flux and lysimeters have been used to infer and evaluate field scale water balances because they have larger footprint areas than local soil moisture measurements. This study quantifies heterogeneity of soil deep drainage (D) in four 12.5 m2 repacked lysimeters, compares evapotranspiration from eddy covariance (ETEC) and mass balance residuals of lysimeters (ETwbLys), and models D...
Meson form factors and covariant three-dimensional formulation of composite model
Skachkov, N.B.; Solovtsov, I.L.
1978-01-01
An approach is developed which is applied in the framework of the relativistic quark model to obtain explicit expressions for meson form factors in terms of covariant wave functions of the two-quark system. These wave functions obey the two-particle quasipotential equation in which the relative motion of quarks is singled out in a covariant way. The exact form of the wave functions is found using the transition to the relativistic configurational representation with the help of the harmonic analysis on the Lorentz group instead of the usual Fourier expansion and then solving the relativistic difference equation thus obtained. The expressions found for form factors are transformed into the three-dimensional covariant form which is a direct geometrical relativistic generalization of analogous expressions of the nonrelativistic quantum mechanics and which, in the Coulomb field, provides the decrease of the meson form factor according to the law F_π(t) ~ t^(-1) as -t → ∞.
Bias-Corrected Estimation of Noncentrality Parameters of Covariance Structure Models
Raykov, Tenko
2005-01-01
A bias-corrected estimator of noncentrality parameters of covariance structure models is discussed. The approach represents an application of the bootstrap methodology for purposes of bias correction, and utilizes the relation between the average of resampled conventional noncentrality parameter estimates and their sample counterpart. The…
Gil, Einat; Gibbs, Alison L.
2017-01-01
In this study, we follow students' modeling and covariational reasoning in the context of learning about big data. A three-week unit was designed to allow 12th grade students in a mathematics course to explore big and mid-size data using concepts such as trend and scatter to describe the relationships between variables in multivariate settings.…
Meson form factors and covariant three-dimensional formulation of the composite model
Skachkov, N.B.; Solovtsov, I.L.
1979-01-01
An approach is developed which allows, within the relativistic quark model, explicit expressions for meson form factors to be found in terms of the wave functions of the two-quark system, which obey the covariant two-particle quasipotential equation. The exact form of the wave functions is obtained by passing to the relativistic configurational representation. As an example, the quark Coulomb interaction is considered.
Rebmann, C.; Göckede, M.; Foken, T.; Aubinet, M.; Aurela, M.; Berbigier, P.; Bernhofer, C.; Buchmann, N.; Carrara, A.; Cescatti, A.; Ceulemans, R.; Clement, R.; Elbers, J. A.; Granier, A.; Grünwald, T.; Guyon, D.; Havránková, Kateřina; Heinesch, B.; Knohl, A.; Laurila, T.; Longdoz, B.; Marcolla, B.; Markkanen, T.; Miglietta, F.; Moncrieff, J.; Montagnani, L.; Moors, E.; Nardino, M.; Ourcival, J.-M.; Rambal, S.; Rannik, Ü.; Rotenberg, E.; Sedlák, Pavel; Unterhuber, G.; Vesala, T.; Yakir, D.
2005-01-01
Roč. 80 (2005), s. 121-141, ISSN 0177-798X. Grant - others: Carboeuroflux (XE) EVK-2-CT-1999-00032. Institutional research plan: CEZ:AV0Z30420517; CEZ:AV0Z6087904. Keywords: Eddy covariance; Quality assurance; Quality control; Footprint modelling; Heterogeneity. Subject RIV: DG - Atmosphere Sciences, Meteorology. Impact factor: 1.295, year: 2005
Strathe, Anders B; Mark, Thomas; Nielsen, Bjarne
2014-01-01
Random regression models were used to estimate covariance functions between cumulated feed intake (CFI) and body weight (BW) in 8424 Danish Duroc pigs. Random regressions on second order Legendre polynomials of age were used to describe genetic and permanent environmental curves in BW and CFI...
Lagged PM2.5 effects in mortality time series: Critical impact of covariate model
The two most common approaches to modeling the effects of air pollution on mortality are the Harvard and the Johns Hopkins (NMMAPS) approaches. These two approaches, which use different sets of covariates, result in dissimilar estimates of the effect of lagged fine particulate ma...
Bhadra, Anindya; Carroll, Raymond J
2016-07-01
In truncated polynomial spline or B-spline models where the covariates are measured with error, a fully Bayesian approach to model fitting requires the covariates and model parameters to be sampled at every Markov chain Monte Carlo iteration. Sampling the unobserved covariates poses a major computational problem and usually Gibbs sampling is not possible. This forces the practitioner to use a Metropolis-Hastings step which might suffer from unacceptable performance due to poor mixing and might require careful tuning. In this article we show that, for truncated polynomial spline or B-spline models of degree one, the complete conditional distribution of the covariates measured with error is available explicitly as a mixture of double-truncated normals, thereby enabling a Gibbs sampling scheme. We demonstrate via a simulation study that our technique performs favorably in terms of computational efficiency and statistical performance. Our results indicate up to 62% and 54% increases in mean integrated squared error efficiency when compared to existing alternatives, using truncated polynomial splines and B-splines respectively. Furthermore, there is evidence that the gain in efficiency increases with the measurement error variance, indicating the proposed method is a particularly valuable tool for challenging applications that present high measurement error. We conclude with a demonstration on a nutritional epidemiology data set from the NIH-AARP study and by pointing out some possible extensions of the current work.
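The key computational object, a complete conditional that is a mixture of doubly-truncated normals, is straightforward to sample. The sketch below uses simple rejection sampling and arbitrary illustrative component parameters, not quantities derived from the spline measurement-error model:

```python
import numpy as np

def sample_truncnorm_mixture(weights, locs, scales, bounds, size, seed=None):
    """Draw from a mixture of doubly-truncated normals, the distributional
    form that enables Gibbs sampling in the degree-one spline case.
    Truncation is handled by rejection, which suffices when the intervals
    are not far in the tails."""
    rng = np.random.default_rng(seed)
    comp = rng.choice(len(weights), size=size, p=weights)
    out = np.empty(size)
    for i, k in enumerate(comp):
        lo, hi = bounds[k]
        while True:  # rejection: redraw until the sample lands in [lo, hi]
            x = rng.normal(locs[k], scales[k])
            if lo <= x <= hi:
                out[i] = x
                break
    return out
```

In a real Gibbs sampler the weights, locations, scales, and truncation bounds would be recomputed from the current model parameters at every iteration.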
Dagne, Getachew A; Huang, Yangxin
2013-09-30
Common problems to many longitudinal HIV/AIDS, cancer, vaccine, and environmental exposure studies are the presence of a lower limit of quantification of an outcome with skewness and time-varying covariates with measurement errors. There has been relatively little work published simultaneously dealing with these features of longitudinal data. In particular, left-censored data falling below a limit of detection may sometimes have a proportion larger than expected under a usually assumed log-normal distribution. In such cases, alternative models, which can account for a high proportion of censored data, should be considered. In this article, we present an extension of the Tobit model that incorporates a mixture of true undetectable observations and those values from a skew-normal distribution for an outcome with possible left censoring and skewness, and covariates with substantial measurement error. To quantify the covariate process, we offer a flexible nonparametric mixed-effects model within the Tobit framework. A Bayesian modeling approach is used to assess the simultaneous impact of left censoring, skewness, and measurement error in covariates on inference. The proposed methods are illustrated using real data from an AIDS clinical study. Copyright © 2013 John Wiley & Sons, Ltd.
Lei Qin
2014-05-01
We propose a novel approach for tracking an arbitrary object in video sequences for visual surveillance. The first contribution of this work is an automatic feature extraction method that is able to extract compact discriminative features from a feature pool before computing the region covariance descriptor. As the feature extraction method is adaptive to a specific object of interest, we refer to the region covariance descriptor computed using the extracted features as the adaptive covariance descriptor. The second contribution is to propose a weakly supervised method for updating the object appearance model during tracking. The method performs a mean-shift clustering procedure among the tracking result samples accumulated during a period of time and selects a group of reliable samples for updating the object appearance model. As such, the object appearance model is kept up-to-date and is prevented from contamination even in case of tracking mistakes. We conducted comparative experiments on real-world video sequences, which confirmed the effectiveness of the proposed approaches. The tracking system that integrates the adaptive covariance descriptor and the clustering-based model updating method accomplished stable object tracking on challenging video sequences.
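The region covariance descriptor underlying both contributions is simply the covariance matrix of per-pixel feature vectors over an image region (in the style of Tuzel et al.); a minimal sketch, leaving the paper's adaptive feature selection aside:

```python
import numpy as np

def region_covariance(features):
    """Region covariance descriptor: the d x d covariance matrix of the
    per-pixel feature vectors (e.g. position, intensity, gradients)
    collected over an image region. `features` has shape (num_pixels, d)."""
    F = np.asarray(features, float)
    mu = F.mean(axis=0)
    Z = F - mu
    return Z.T @ Z / (len(F) - 1)
```

The descriptor is compact (independent of region size) and is typically compared on the manifold of symmetric positive-definite matrices rather than with a Euclidean distance.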
Estimation of group means when adjusting for covariates in generalized linear models.
Qu, Yongming; Luo, Junxiang
2015-01-01
Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve the estimation efficiency. The model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for this treatment group for the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models can be seriously biased estimates of the true group means. We propose a new method to estimate the group mean consistently, with the corresponding variance estimation. Simulations showed that the proposed method produces an unbiased estimator for the group means and provides the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes. Copyright © 2014 John Wiley & Sons, Ltd.
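The bias the paper addresses can be checked numerically in a toy logistic setting: by Jensen's inequality, the inverse link evaluated at the mean covariate differs from the mean of the inverse link over the covariate distribution. The coefficients below are hypothetical, not fitted to any real data:

```python
import numpy as np

def expit(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative: for a logistic model, the response at the mean covariate
# (what many software packages report as the "model-based mean") is not
# the mean response over the covariate distribution.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 2.0, size=10_000)
beta0, beta1 = -1.0, 1.5                 # hypothetical fitted coefficients

mean_at_mean_cov = expit(beta0 + beta1 * x.mean())  # response at mean covariate
marginal_mean = expit(beta0 + beta1 * x).mean()     # population-average mean
```

Averaging the fitted responses over the observed covariates (the second quantity) is the kind of consistent group-mean estimate the paper argues for.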
Shen, Chung-Wei; Chen, Yi-Hau
2015-10-01
Missing observations and covariate measurement error commonly arise in longitudinal data. However, existing methods for model selection in marginal regression analysis of longitudinal data fail to address the potential bias resulting from these issues. To tackle this problem, we propose a new model selection criterion, the Generalized Longitudinal Information Criterion, which is based on an approximately unbiased estimator for the expected quadratic error of a considered marginal model accounting for both data missingness and covariate measurement error. The simulation results reveal that the proposed method performs quite well in the presence of missing data and covariate measurement error. In contrast, naive procedures that ignore such complexity in the data may perform quite poorly. The proposed method is applied to data from the Taiwan Longitudinal Study on Aging to assess the relationship of depression with health and social status in the elderly, accommodating measurement error in the covariate as well as missing observations. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Studnicki, M.; Mądry, W.; Noras, K.; Wójcik-Gront, E.; Gacek, E.
2016-11-01
The main objectives of multi-environmental trials (METs) are to assess cultivar adaptation patterns under different environmental conditions and to investigate genotype by environment (G×E) interactions. Linear mixed models (LMMs) with more complex variance-covariance structures have become recognized and widely used for analyzing METs data. Best practice in METs analysis is to carry out a comparison of competing models with different variance-covariance structures. Improperly chosen variance-covariance structures may lead to biased estimation of means, resulting in incorrect conclusions. In this work we focused on the adaptive response of cultivars to the environments as modeled by LMMs with different variance-covariance structures. We identified possible limitations of inference when using an inadequate variance-covariance structure. In the presented study we used the dataset on grain yield for 63 winter wheat cultivars, evaluated across 18 locations during three growing seasons (2008/2009-2010/2011), from the Polish Post-registration Variety Testing System. For the evaluation of variance-covariance structures and the description of cultivar adaptation to environments, we calculated adjusted means for the combination of cultivar and location in models with different variance-covariance structures. We concluded that in order to fully describe cultivars' adaptive patterns, modelers should use the unrestricted variance-covariance structure. The restricted compound symmetry structure may interfere with proper interpretation of cultivars' adaptive patterns. We found that the factor-analytic structure is also a good tool to describe cultivars' reaction to environments, and it can be successfully used on METs data after determining the optimal component number for each dataset. (Author)
Covariant quantization of infinite spin particle models, and higher order gauge theories
Edgren, Ludde; Marnelius, Robert
2006-01-01
Further properties of a recently proposed higher order infinite spin particle model are derived. Infinitely many classically equivalent but different Hamiltonian formulations are shown to exist. This leads to a condition of uniqueness in the quantization process. A consistent covariant quantization is shown to exist. Also a recently proposed supersymmetric version for half-odd integer spins is quantized. A general algorithm to derive gauge invariances of higher order Lagrangians is given and applied to the infinite spin particle model, and to a new higher order model for a spinning particle which is proposed here, as well as to a previously given higher order rigid particle model. The latter two models are also covariantly quantized
Data Fusion of Gridded Snow Products Enhanced with Terrain Covariates and a Simple Snow Model
Snauffer, A. M.; Hsieh, W. W.; Cannon, A. J.
2017-12-01
Hydrologic planning requires accurate estimates of regional snow water equivalent (SWE), particularly in areas with hydrologic regimes dominated by spring melt. While numerous gridded data products provide such estimates, accurate representations are particularly challenging under conditions of mountainous terrain, heavy forest cover and large snow accumulations, contexts which in many ways define the province of British Columbia (BC), Canada. One promising avenue for improving SWE estimates is a data fusion approach which combines field observations with gridded SWE products and relevant covariates. A base artificial neural network (ANN) was constructed using three of the best performing gridded SWE products over BC (ERA-Interim/Land, MERRA and GLDAS-2) and simple location and time covariates. This base ANN was then enhanced to include terrain covariates (slope, aspect and Terrain Roughness Index, TRI) as well as a simple 1-layer energy balance snow model driven by gridded bias-corrected ANUSPLIN temperature and precipitation values. The ANN enhanced with all aforementioned covariates performed better than the base ANN, but most of the skill improvement was attributable to the snow model, with very little contribution from the terrain covariates. The enhanced ANN improved station mean absolute error (MAE) by an average of 53% relative to the composing gridded products over the province. The interannual peak SWE correlation coefficient was found to be 0.78, an improvement of 0.05 to 0.18 over the composing products. This nonlinear approach outperformed a comparable multiple linear regression (MLR) model by 22% in MAE and 0.04 in interannual correlation. The enhanced ANN has also been shown to estimate SWE better than the Variable Infiltration Capacity (VIC) hydrologic model calibrated and run for four BC watersheds, improving MAE by 22% and correlation by 0.05. The performance improvements of the enhanced ANN are statistically significant at the 5% level across the province and
Robust entry guidance using linear covariance-based model predictive control
Jianjun Luo
2017-02-01
For atmospheric entry vehicles, guidance design can be accomplished by solving an optimal control problem using optimal control theories. However, traditional design methods generally focus on nominal performance and do not include considerations of robustness in the design process. This paper proposes a linear covariance-based model predictive control method for robust entry guidance design. Firstly, linear covariance analysis is employed to incorporate robustness directly into the guidance design. The closed-loop covariance with the feedback-updated control command is initially formulated to provide the expected errors of the nominal state variables in the presence of uncertainties. Then, the closed-loop covariance is used as a component of the cost function to guarantee robustness and reduce sensitivity to uncertainties. After that, model predictive control is used to solve the optimal problem, and the control commands (bank angles) are calculated. Finally, a series of simulations for different missions has been completed to demonstrate the high precision and the robustness with respect to initial perturbations as well as uncertainties in the entry process. The 3σ confidence region results in the presence of uncertainties show that the robustness of the guidance has been improved, and that the errors of the state variables are decreased by approximately 35%.
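The linear covariance analysis ingredient amounts to propagating the state covariance through the closed-loop dynamics, a discrete Lyapunov recursion. A generic sketch follows; the matrices in any real use would come from the linearized entry-vehicle model, not these illustrative placeholders:

```python
import numpy as np

def closed_loop_covariance(A, B, K, Q, P0, steps):
    """Propagate the state covariance through closed-loop linear dynamics
    x_{k+1} = (A - B K) x_k + w_k, with process noise w_k ~ N(0, Q).
    Returns the covariance history; a scalar summary of the final P (e.g.
    its trace) is the kind of term the paper adds to the MPC cost."""
    Acl = A - B @ K
    P = P0.copy()
    hist = [P0]
    for _ in range(steps):
        P = Acl @ P @ Acl.T + Q  # discrete Lyapunov recursion
        hist.append(P)
    return hist
```

If the closed-loop matrix A - BK is stable, the recursion converges to the steady-state covariance, so penalizing it in the cost directly trades nominal performance against expected dispersion.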
Quark model with chiral-symmetry breaking and confinement in the Covariant Spectator Theory
Biernat, Elmer P. [CFTP, Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisboa, Portugal]; Pena, Maria Teresa [CFTP, Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisboa, Portugal; Departamento de Física, Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisboa, Portugal]; Ribeiro, José Emílio F. [CeFEMA, Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisboa, Portugal]; Stadler, Alfred [Departamento de Física, Universidade de Évora, 7000-671 Évora, Portugal]; Gross, Franz L. [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States)]
2016-03-01
We propose a model for the quark-antiquark interaction in Minkowski space using the Covariant Spectator Theory. We show that with an equal-weighted scalar-pseudoscalar structure for the confining part of our interaction kernel the axial-vector Ward-Takahashi identity is preserved and our model complies with the Adler-zero constraint for pi-pi scattering imposed by chiral symmetry.
Fu, Jianbin
2016-01-01
The multidimensional item response theory (MIRT) models with covariates proposed by Haberman and implemented in the "mirt" program provide a flexible way to analyze data based on item response theory. In this report, we discuss applications of the MIRT models with covariates to longitudinal test data to measure skill differences at the…
Robust estimation for partially linear models with large-dimensional covariates.
Zhu, LiPing; Li, RunZe; Cui, HengJian
2013-10-01
We are concerned with robust estimation procedures to estimate the parameters in partially linear models with large-dimensional covariates. To enhance the interpretability, we suggest implementing a nonconcave regularization method in the robust estimation procedure to select important covariates from the linear component. We establish the consistency for both the linear and the nonlinear components when the covariate dimension diverges at the rate of [Formula: see text], where n is the sample size. We show that the robust estimate of the linear component performs asymptotically as well as its oracle counterpart, which assumes the baseline function and the unimportant covariates were known a priori. With a consistent estimator of the linear component, we estimate the nonparametric component by a robust local linear regression. It is proved that the robust estimate of the nonlinear component performs asymptotically as well as if the linear component were known in advance. Comprehensive simulation studies are carried out and an application is presented to examine the finite-sample performance of the proposed procedures.
Bernhardt, Paul W; Wang, Huixia Judy; Zhang, Daowen
2014-01-01
Models for survival data generally assume that covariates are fully observed. However, in medical studies it is not uncommon for biomarkers to be censored at known detection limits. A computationally-efficient multiple imputation procedure for modeling survival data with covariates subject to detection limits is proposed. This procedure is developed in the context of an accelerated failure time model with a flexible seminonparametric error distribution. The consistency and asymptotic normality of the multiple imputation estimator are established and a consistent variance estimator is provided. An iterative version of the proposed multiple imputation algorithm that approximates the EM algorithm for maximum likelihood is also suggested. Simulation studies demonstrate that the proposed multiple imputation methods work well while alternative methods lead to estimates that are either biased or more variable. The proposed methods are applied to analyze the dataset from a recently-conducted GenIMS study.
A. Budishchev
2014-09-01
Most plot-scale methane emission models – of which many have been developed in the recent past – are validated using data collected with the closed-chamber technique. This method, however, suffers from a low spatial representativeness and a poor temporal resolution. Also, during a chamber-flux measurement the air within a chamber is separated from the ambient atmosphere, which negates the influence of wind on emissions. Additionally, some methane models are validated by upscaling fluxes based on the area-weighted averages of modelled fluxes, and by comparing those to the eddy covariance (EC) flux. This technique is rather inaccurate, as the area of upscaling might be different from the EC tower footprint, thereby introducing significant mismatch. In this study, we present an approach to validate plot-scale methane models with EC observations using the footprint-weighted average method. Our results show that the fluxes obtained by the footprint-weighted average method are of the same magnitude as the EC flux. More importantly, the temporal dynamics of the EC flux on a daily timescale are also captured (r2 = 0.7). In contrast, using the area-weighted average method yielded a low (r2 = 0.14) correlation with the EC measurements. This shows that the footprint-weighted average method is preferable when validating methane emission models with EC fluxes for areas with a heterogeneous and irregular vegetation pattern.
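The footprint-weighted average itself is a one-line computation: weight each plot-scale modelled flux by its (normalized) contribution to the EC footprint. A minimal sketch with hypothetical weights; real weights would come from a footprint model evaluated at each plot:

```python
import numpy as np

def footprint_weighted_flux(fluxes, footprint_weights):
    """Footprint-weighted average of plot-scale modelled fluxes for
    comparison with an eddy covariance flux. Weights are normalized so
    they need not sum to one on input."""
    w = np.asarray(footprint_weights, float)
    w = w / w.sum()
    return float(np.sum(w * np.asarray(fluxes, float)))
```

The area-weighted alternative the study criticizes is the same formula with plot areas in place of footprint weights, which ignores where the EC tower is actually "looking" at each time step.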
Gaussian covariance graph models accounting for correlated marker effects in genome-wide prediction.
Martínez, C A; Khare, K; Rahman, S; Elzo, M A
2017-10-01
Several statistical models used in genome-wide prediction assume uncorrelated marker allele substitution effects, but it is known that these effects may be correlated. In statistics, graphical models have been identified as a useful tool for covariance estimation in high-dimensional problems and it is an area that has recently experienced a great expansion. In Gaussian covariance graph models (GCovGM), the joint distribution of a set of random variables is assumed to be Gaussian and the pattern of zeros of the covariance matrix is encoded in terms of an undirected graph G. In this study, methods adapting the theory of GCovGM to genome-wide prediction were developed (Bayes GCov, Bayes GCov-KR and Bayes GCov-H). In simulated data sets, improvements in correlation between phenotypes and predicted breeding values and in accuracies of predicted breeding values were found. Our models account for correlation of marker effects and permit general structures to be accommodated, as opposed to models proposed in previous studies, which consider spatial correlation only. In addition, they allow incorporation of biological information in the prediction process through its use when constructing graph G, and their extension to the multi-allelic loci case is straightforward. © 2017 Blackwell Verlag GmbH.
Ziyatdinov, Andrey; Vázquez-Santiago, Miquel; Brunel, Helena; Martinez-Perez, Angel; Aschard, Hugues; Soria, Jose Manuel
2018-02-27
Quantitative trait locus (QTL) mapping in genetic data often involves analysis of correlated observations, which need to be accounted for to avoid false association signals. This is commonly performed by modeling such correlations as random effects in linear mixed models (LMMs). The R package lme4 is a well-established tool that implements major LMM features using sparse matrix methods; however, it is not fully adapted for QTL mapping association and linkage studies. In particular, two LMM features are lacking in the base version of lme4: the definition of random effects by custom covariance matrices, and parameter constraints, which are essential in advanced QTL models. Apart from applications in linkage studies of related individuals, such functionalities are of high interest for association studies in situations where multiple covariance matrices need to be modeled, a scenario not covered by many genome-wide association study (GWAS) software packages. To address the aforementioned limitations, we developed a new R package, lme4qtl, as an extension of lme4. First, lme4qtl contributes new models for genetic studies within a single tool integrated with lme4 and its companion packages. Second, lme4qtl offers a flexible framework for scenarios with multiple levels of relatedness and becomes efficient when covariance matrices are sparse. We showed the value of our package using real family-based data in the Genetic Analysis of Idiopathic Thrombophilia 2 (GAIT2) project. Our software lme4qtl enables QTL mapping models with a versatile structure of random effects and efficient computation for sparse covariances. lme4qtl is available at https://github.com/variani/lme4qtl .
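Once variance components are fixed, mixed models with a custom covariance matrix reduce to generalized least squares with that covariance. A minimal numpy sketch of this core computation (not the lme4qtl API, which also estimates the variance components and exploits sparsity):

```python
import numpy as np

def gls(X, y, V):
    """Generalized least squares with a user-supplied covariance matrix V:
    beta = (X' V^{-1} X)^{-1} X' V^{-1} y. With V a kinship-derived
    covariance, this is the fixed-effect estimate underlying LMM-based
    association tests (given known variance components)."""
    Vi = np.linalg.inv(V)
    return np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)
```

With V equal to the identity this collapses to ordinary least squares; a non-trivial V downweights clusters of related, correlated individuals.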
Piana, Chiara; Danhof, Meindert; Della Pasqua, Oscar
2014-01-01
Aims: The accuracy of model-based predictions often reported in paediatric research has not been thoroughly characterized. The aim of this exercise is therefore to evaluate the role of covariate distributions when a pharmacokinetic model is used for simulation purposes. Methods: Plasma concentrations of a hypothetical drug were simulated in a paediatric population using a pharmacokinetic model in which body weight was correlated with clearance and volume of distribution. Two subgroups of children were then selected from the overall population according to a typical study design, in which pre-specified body weight ranges (10–15 kg and 30–40 kg) were used as inclusion criteria. The simulated data sets were then analyzed using non-linear mixed effects modelling. Model performance was assessed by comparing the accuracy of AUC predictions obtained for each subgroup, based on the model derived from the overall population and by extrapolation of the model parameters across subgroups. Results: Our findings show that systemic exposure as well as pharmacokinetic parameters cannot be accurately predicted from the pharmacokinetic model obtained from a population with a different covariate range from the one explored during model building. Predictions were accurate only when a model was used for prediction in a subgroup of the initial population. Conclusions: In contrast to current practice, the use of pharmacokinetic modelling in children should be limited to interpolations within the range of values observed during model building. Furthermore, the covariate point estimate must be kept in the model even when predictions refer to a subset different from the original population. PMID:24433411
Covariance matrices for nuclear cross sections derived from nuclear model calculations
Smith, D. L.
2005-01-01
The growing need for covariance information to accompany the evaluated cross section data libraries utilized in contemporary nuclear applications is spurring the development of new methods to provide this information. Many of the current general purpose libraries of evaluated nuclear data used in applications are derived either almost entirely from nuclear model calculations or from nuclear model calculations benchmarked by available experimental data. Consequently, a consistent method for generating covariance information under these circumstances is required. This report discusses a new approach to producing covariance matrices for cross sections calculated using nuclear models. The present method involves establishing uncertainty information for the underlying parameters of nuclear models used in the calculations and then propagating these uncertainties through to the derived cross sections and related nuclear quantities by means of a Monte Carlo technique rather than the more conventional matrix error propagation approach used in some alternative methods. The formalism to be used in such analyses is discussed in this report along with various issues and caveats that need to be considered in order to proceed with a practical implementation of the methodology
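The Monte Carlo propagation the report describes can be sketched generically: draw parameter vectors from their assumed (here Gaussian) distribution, evaluate the model for each draw, and form the empirical covariance of the outputs. The two-output linear toy model below is an illustrative assumption, not a nuclear reaction model:

```python
import numpy as np

def mc_covariance(model, p_mean, p_cov, n=5000, seed=0):
    """Propagate model-parameter uncertainties to derived quantities by
    Monte Carlo: sample parameters, evaluate the model, and take the
    empirical covariance of the outputs. `model` maps a parameter vector
    to a vector of derived quantities (e.g. cross sections)."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(p_mean, p_cov, size=n)
    outputs = np.array([model(p) for p in samples])
    return np.cov(outputs, rowvar=False)

# toy "model": two outputs sharing the first parameter, so the outputs
# are correlated even though the parameters are not
toy = lambda p: np.array([p[0] + p[1], 2.0 * p[0]])
C = mc_covariance(toy, np.array([1.0, 0.5]), np.diag([0.04, 0.01]))
```

Unlike first-order matrix error propagation, the sampling approach handles nonlinear models without a sensitivity-matrix approximation, which is the advantage the report emphasizes.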
Covariance of random stock prices in the Stochastic Dividend Discount Model
Agosto, Arianna; Mainini, Alessandra; Moretto, Enrico
2016-01-01
Dividend discount models have been developed in a deterministic setting. Some authors (Hurley and Johnson, 1994 and 1998; Yao, 1997) have introduced randomness in terms of stochastic growth rates, delivering closed-form expressions for the expected value of stock prices. This paper extends such previous results by determining a formula for the covariance between random stock prices when the dividends' rates of growth are correlated. The formula is then applied to real market data.
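A closed-form price covariance of this kind can be cross-checked by brute force in a toy setting: simulate correlated dividend growth paths for two stocks, form finite-horizon discounted prices, and take the sample covariance. All numbers below (horizon, rates, correlation) are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

# Toy check that correlated dividend growth induces price covariance.
rng = np.random.default_rng(1)
n, T, r, rho = 50_000, 20, 0.08, 0.6
cov = (0.02 ** 2) * np.array([[1.0, rho], [rho, 1.0]])
g = rng.multivariate_normal([0.03, 0.03], cov, size=(n, T))  # (n, T, 2)
growth = np.cumprod(1.0 + g, axis=1)          # cumulative dividend growth
disc = (1.0 + r) ** -np.arange(1, T + 1)      # discount factors
prices = (growth * disc[None, :, None]).sum(axis=1)  # D0 = 1 for both stocks
sample_cov = np.cov(prices.T)                 # 2 x 2 price covariance
```

With positively correlated growth rates the off-diagonal entry comes out positive, as the closed-form expression predicts.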
Smith, D.L.; Guenther, P.T.
1983-11-01
We suggest a procedure for estimating uncertainties in neutron cross sections calculated with a nuclear model descriptive of a specific mass region. It applies standard error propagation techniques, using a model-parameter covariance matrix. Generally, available codes do not generate covariance information in conjunction with their fitting algorithms. Therefore, we resort to estimating a relative covariance matrix a posteriori from a statistical examination of the scatter of elemental parameter values about the regional representation. We numerically demonstrate our method by considering an optical-statistical model analysis of a body of total and elastic scattering data for the light fission-fragment mass region. In this example, strong uncertainty correlations emerge and they conspire to reduce estimated errors to some 50% of those obtained from a naive uncorrelated summation in quadrature. 37 references.
Kawano, Toshihiko; Shibata, Keiichi.
1997-09-01
A covariance evaluation system for the evaluated nuclear data library was established. The parameter estimation method and the least squares method with a spline function are used to generate the covariance data. Uncertainties of nuclear reaction model parameters are estimated from experimental data uncertainties, then the covariance of the evaluated cross sections is calculated by means of error propagation. Computer programs ELIESE-3, EGNASH4, ECIS, and CASTHY are used. Covariances of 238U reaction cross sections were calculated with this system. (author)
Tian, Wei; Cai, Li; Thissen, David; Xin, Tao
2013-01-01
In item response theory (IRT) modeling, the item parameter error covariance matrix plays a critical role in statistical inference procedures. When item parameters are estimated using the EM algorithm, the parameter error covariance matrix is not an automatic by-product of item calibration. Cai proposed the use of Supplemented EM algorithm for…
Inverse modeling of the terrestrial carbon flux in China with flux covariance among inverted regions
Wang, H.; Jiang, F.; Chen, J. M.; Ju, W.; Wang, H.
2011-12-01
Quantitative understanding of the role of the ocean and the terrestrial biosphere in the global carbon cycle, and of their response and feedback to climate change, is required for future projections of the global climate. China has the largest anthropogenic CO2 emissions, diverse terrestrial ecosystems and an unprecedented rate of urbanization. Information on the spatial and temporal distributions of the terrestrial carbon flux in China is therefore of great importance in understanding the global carbon cycle. We developed a nested inversion with a focus on China. Based on the 22 TransCom regions for the globe, we divide China and its neighboring countries into 17 regions, making 39 regions in total for the globe. A Bayesian synthesis inversion is made to estimate the terrestrial carbon flux based on GlobalView CO2 data. In the inversion, GEOS-Chem is used as the transport model to develop the transport matrix. A terrestrial ecosystem model named BEPS is used to produce the prior surface flux to constrain the inversion. However, the sparseness of available observation stations in Asia poses a challenge to the inversion for the 17 small regions. To obtain an additional constraint on the inversion, a prior flux covariance matrix is constructed using the BEPS model through analyzing the correlation in the net carbon flux among regions under variable climate conditions. The use of the covariance among different regions in the inversion effectively extends the information content of CO2 observations to more regions. The carbon fluxes over the 39 land and ocean regions are inverted for the period from 2004 to 2009. In order to investigate the impact of introducing the covariance matrix with non-zero off-diagonal values into the inversion, the inverted terrestrial carbon flux over China is evaluated against ChinaFlux eddy-covariance observations after applying an upscaling methodology.
Tests for detecting overdispersion in models with measurement error in covariates.
Yang, Yingsi; Wong, Man Yu
2015-11-30
Measurement error in covariates can affect the accuracy in count data modeling and analysis. In overdispersion identification, the true mean-variance relationship can be obscured under the influence of measurement error in covariates. In this paper, we propose three tests for detecting overdispersion when covariates are measured with error: a modified score test and two score tests based on the proposed approximate likelihood and quasi-likelihood, respectively. The proposed approximate likelihood is derived under the classical measurement error model, and the resulting approximate maximum likelihood estimator is shown to have superior efficiency. Simulation results also show that the score test based on approximate likelihood outperforms the test based on quasi-likelihood and other alternatives in terms of empirical power. By analyzing a real dataset containing the health-related quality-of-life measurements of a particular group of patients, we demonstrate the importance of the proposed methods by showing that the analyses with and without measurement error correction yield significantly different results. Copyright © 2015 John Wiley & Sons, Ltd.
Bayesian nonparametric generative models for causal inference with missing at random covariates.
Roy, Jason; Lum, Kirsten J; Zeldow, Bret; Dworkin, Jordan D; Re, Vincent Lo; Daniels, Michael J
2018-03-26
We propose a general Bayesian nonparametric (BNP) approach to causal inference in the point treatment setting. The joint distribution of the observed data (outcome, treatment, and confounders) is modeled using an enriched Dirichlet process. The combination of the observed data model and causal assumptions allows us to identify any type of causal effect-differences, ratios, or quantile effects, either marginally or for subpopulations of interest. The proposed BNP model is well-suited for causal inference problems, as it does not require parametric assumptions about the distribution of confounders and naturally leads to a computationally efficient Gibbs sampling algorithm. By flexibly modeling the joint distribution, we are also able to impute (via data augmentation) values for missing covariates within the algorithm under an assumption of ignorable missingness, obviating the need to create separate imputed data sets. This approach for imputing the missing covariates has the additional advantage of guaranteeing congeniality between the imputation model and the analysis model, and because we use a BNP approach, parametric models are avoided for imputation. The performance of the method is assessed using simulation studies. The method is applied to data from a cohort study of human immunodeficiency virus/hepatitis C virus co-infected patients. © 2018, The International Biometric Society.
Yap, John Stephen; Fan, Jianqing; Wu, Rongling
2009-12-01
Estimation of the covariance structure of longitudinal processes is a fundamental prerequisite for the practical deployment of functional mapping designed to study the genetic regulation and network of quantitative variation in dynamic complex traits. We present a nonparametric approach for estimating the covariance structure of a quantitative trait measured repeatedly at a series of time points. Specifically, we adopt Huang et al.'s (2006, Biometrika 93, 85-98) approach of invoking the modified Cholesky decomposition and converting the problem into modeling a sequence of regressions of responses. A regularized covariance estimator is obtained using a normal penalized likelihood with an L2 penalty. This approach, embedded within a mixture likelihood framework, leads to enhanced accuracy, precision, and flexibility of functional mapping while preserving its biological relevance. Simulation studies are performed to reveal the statistical properties and advantages of the proposed method. A real example from a mouse genome project is analyzed to illustrate the utilization of the methodology. The new method will provide a useful tool for genome-wide scanning for the existence and distribution of quantitative trait loci underlying a dynamic trait important to agriculture, biology, and health sciences.
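The modified Cholesky idea invoked above can be sketched without the penalty term: each response is regressed on its predecessors, the negated coefficients fill a unit lower-triangular matrix T, and the prediction-error variances form a diagonal D, so that the reconstructed covariance T^{-1} D T^{-T} is positive definite by construction. The data, dimensions, and true covariance below are invented for illustration; no L2 regularization or mixture framework is included.

```python
import numpy as np

# Toy modified-Cholesky covariance estimate from longitudinal-style data.
rng = np.random.default_rng(1)
n, T_len = 500, 5                         # subjects, time points (invented)
A = rng.normal(size=(T_len, T_len))
Sigma_true = A @ A.T + T_len * np.eye(T_len)   # arbitrary SPD covariance
Y = rng.multivariate_normal(np.zeros(T_len), Sigma_true, size=n)

Tmat = np.eye(T_len)                      # unit lower-triangular factor
Dvar = np.empty(T_len)                    # innovation variances
Dvar[0] = Y[:, 0].var()
for t in range(1, T_len):
    # Regress response at time t on all earlier responses.
    phi, *_ = np.linalg.lstsq(Y[:, :t], Y[:, t], rcond=None)
    Tmat[t, :t] = -phi
    Dvar[t] = (Y[:, t] - Y[:, :t] @ phi).var()

Tinv = np.linalg.inv(Tmat)
Sigma_hat = Tinv @ np.diag(Dvar) @ Tinv.T  # positive definite by construction
```

Positive definiteness holds for any regression coefficients, which is the appeal of the decomposition: regularizing the coefficients (as the paper does) cannot break validity of the resulting covariance.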
Colclough, Giles L; Woolrich, Mark W; Harrison, Samuel J; Rojas López, Pedro A; Valdes-Sosa, Pedro A; Smith, Stephen M
2018-05-07
A Bayesian model for sparse, hierarchical inverse covariance estimation is presented, and applied to multi-subject functional connectivity estimation in the human brain. It enables simultaneous inference of the strength of connectivity between brain regions at both subject and population level, and is applicable to fMRI, MEG and EEG data. Two versions of the model can encourage sparse connectivity, either using continuous priors to suppress irrelevant connections, or using an explicit description of the network structure to estimate the connection probability between each pair of regions. A large evaluation of this model, and thirteen methods that represent the state of the art of inverse covariance modelling, is conducted using both simulated and resting-state functional imaging datasets. Our novel Bayesian approach has similar performance to the best extant alternative, Ng et al.'s Sparse Group Gaussian Graphical Model algorithm, which also is based on a hierarchical structure. Using data from the Human Connectome Project, we show that these hierarchical models are able to reduce the measurement error in MEG beta-band functional networks by 10%, producing concomitant increases in estimates of the genetic influence on functional connectivity. Copyright © 2018. Published by Elsevier Inc.
Dreano, Denis
2017-04-05
Specification and tuning of errors from dynamical models are important issues in data assimilation. In this work, we propose an iterative expectation-maximisation (EM) algorithm to estimate the model error covariances using classical extended and ensemble versions of the Kalman smoother. We show that, for additive model errors, the estimate of the error covariance converges. We also investigate other forms of model error, such as parametric or multiplicative errors. We show that additive Gaussian model error is able to compensate for non-additive sources of error in the algorithms we propose. We also demonstrate the limitations of the extended version of the algorithm and recommend the use of the more robust and flexible ensemble version. This article is a proof of concept of the methodology with the Lorenz-63 attractor. We developed an open-source Python library to enable future users to apply the algorithm to their own nonlinear dynamical models.
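As a toy illustration of the additive-error case (not the paper's smoother-based algorithm): for a linear model, the EM M-step estimate of the model-error covariance Q is the average outer product of one-step model residuals computed from smoothed states. Here the true states of an invented scalar model stand in for Kalman-smoother output, so only the M-step arithmetic is shown.

```python
import numpy as np

# Scalar linear toy model x_{t+1} = M x_t + w_t, w_t ~ N(0, Q_true).
# Given states (normally the smoother output), the M-step update for the
# additive model-error variance is the mean squared one-step residual.
# All numbers below are invented.
rng = np.random.default_rng(2)
M = 0.9                        # model operator
Q_true = 0.25                  # true model-error variance
T = 20_000                     # number of time steps

x = np.empty(T)
x[0] = 0.0
for t in range(T - 1):
    x[t + 1] = M * x[t] + rng.normal(scale=np.sqrt(Q_true))

resid = x[1:] - M * x[:-1]     # one-step model residuals
Q_hat = np.mean(resid ** 2)    # M-step estimate of Q
```

In the full algorithm this update is alternated with an E-step (the Kalman or ensemble smoother run under the current Q), which is where the convergence result quoted in the abstract applies.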
Relating covariant and canonical approaches to triangulated models of quantum gravity
Arnsdorf, Matthias
2002-01-01
In this paper we explore the relation between covariant and canonical approaches to quantum gravity and BF theory. We will focus on the dynamical triangulation and spin-foam models, which have in common that they can be defined in terms of sums over spacetime triangulations. Our aim is to show how we can recover these covariant models from a canonical framework by providing two regularizations of the projector onto the kernel of the Hamiltonian constraint. This link is important for the understanding of the dynamics of quantum gravity. In particular, we will see how in the simplest dynamical triangulation model we can recover the Hamiltonian constraint via our definition of the projector. Our discussion of spin-foam models will show how the elementary spin-network moves in loop quantum gravity, which were originally assumed to describe the Hamiltonian constraint action, are in fact related to the time-evolution generated by the constraint. We also show that the Immirzi parameter is important for the understanding of a continuum limit of the theory
Simulation of parametric model towards the fixed covariate of right censored lung cancer data
Afiqah Muhamad Jamil, Siti; Asrul Affendi Abdullah, M.; Kek, Sie Long; Ridwan Olaniran, Oyebayo; Enera Amran, Syahila
2017-09-01
In this study, a simulation procedure was applied to measure the fixed covariate of right-censored data using a parametric survival model. The scale and shape parameters were modified to differentiate the analysis of the parametric regression survival model. Statistically, the biases, mean biases and the coverage probability were used in this analysis. Sample sizes of 50, 100, 150 and 200 were employed to distinguish the impact of the parametric regression model on right-censored data. The R statistical software was utilised to develop the simulation code for right-censored data. In addition, the final model of the right-censored simulation was compared with right-censored lung cancer data from Malaysia. It was found that varying the shape and scale parameters across different sample sizes helps to improve the simulation strategy for right-censored data, and that the Weibull regression survival model provides a suitable fit for the simulated survival data of lung cancer patients in Malaysia.
A versatile method for confirmatory evaluation of the effects of a covariate in multiple models
Pipper, Christian Bressen; Ritz, Christian; Bisgaard, Hans
2012-01-01
Modern epidemiology often requires testing of the effect of a covariate on multiple end points from the same study. However, popular state of the art methods for multiple testing require the tests to be evaluated within the framework of a single model unifying all end points. This severely limits... to provide a fine-tuned control of the overall type I error in a wide range of epidemiological experiments where in reality no other useful alternative exists. The methodology proposed is applied to a multiple-end-point study of the effect of neonatal bacterial colonization on development of childhood asthma...
On the fit of models to covariances and methodology to the Bulletin.
Bentler, P M
1992-11-01
It is noted that 7 of the 10 top-cited articles in the Psychological Bulletin deal with methodological topics. One of these is the Bentler-Bonett (1980) article on the assessment of fit in covariance structure models. Some context is provided on the popularity of this article. In addition, a citation study of methodology articles appearing in the Bulletin since 1978 was carried out. It verified that publications in design, evaluation, measurement, and statistics continue to be important to psychological research. Some thoughts are offered on the role of the journal in making developments in these areas more accessible to psychologists.
A cautionary note on generalized linear models for covariance of unbalanced longitudinal data
Huang, Jianhua Z.; Chen, Min; Maadooliat, Mehdi; Pourahmadi, Mohsen
2012-01-01
Missing data in longitudinal studies can create enormous challenges in data analysis when coupled with the positive-definiteness constraint on a covariance matrix. For complete balanced data, the Cholesky decomposition of a covariance matrix makes...
Del Monego, Maurici; Ribeiro, Paulo Justiniano; Ramos, Patrícia
2015-04-01
In this work, kriging with covariates is used to model and map the spatial distribution of salinity measurements gathered by an autonomous underwater vehicle in a sea outfall monitoring campaign aiming to distinguish the effluent plume from the receiving waters and characterize its spatial variability in the vicinity of the discharge. Four different geostatistical linear models for salinity were assumed, where the distance to diffuser, the west-east positioning, and the south-north positioning were used as covariates. Sample variograms were fitted by the Matèrn models using weighted least squares and maximum likelihood estimation methods as a way to detect eventual discrepancies. Typically, the maximum likelihood method estimated very low ranges which have limited the kriging process. So, at least for these data sets, weighted least squares showed to be the most appropriate estimation method for variogram fitting. The kriged maps show clearly the spatial variation of salinity, and it is possible to identify the effluent plume in the area studied. The results obtained show some guidelines for sewage monitoring if a geostatistical analysis of the data is in mind. It is important to treat properly the existence of anomalous values and to adopt a sampling strategy that includes transects parallel and perpendicular to the effluent dispersion.
Oda, Ryuichi; Ishida, Shin; Wada, Hiroaki; Yamada, Kenji; Sekiguchi, Motoo
1999-01-01
We examine mass spectra and wave functions of the nn-bar, cc-bar and bb-bar meson systems within the framework of the covariant oscillator quark model with the boosted LS-coupling scheme. We solve nonperturbatively an eigenvalue problem for the squared-mass operator, which incorporates the four-dimensional color-Coulomb-type interaction, by taking a set of covariant oscillator wave functions as an expansion basis. We obtain mass spectra of these meson systems, which reproduce quite well their experimental behavior. The resultant manifestly covariant wave functions, which are applicable to analyses of various reaction phenomena, are given. Our results seem to suggest that the present model may be considered effectively as a covariant version of the nonrelativistic linear-plus-Coulomb potential quark model. (author)
Full-Scale Approximations of Spatio-Temporal Covariance Models for Large Datasets
Zhang, Bohai
2014-01-01
Various continuously-indexed spatio-temporal process models have been constructed to characterize spatio-temporal dependence structures, but the computational complexity for model fitting and predictions grows in a cubic order with the size of dataset and application of such models is not feasible for large datasets. This article extends the full-scale approximation (FSA) approach by Sang and Huang (2012) to the spatio-temporal context to reduce computational complexity. A reversible jump Markov chain Monte Carlo (RJMCMC) algorithm is proposed to select knots automatically from a discrete set of spatio-temporal points. Our approach is applicable to nonseparable and nonstationary spatio-temporal covariance models. We illustrate the effectiveness of our method through simulation experiments and application to an ozone measurement dataset.
Rare Λb→Λ l+l- and Λb→Λ γ decays in the relativistic quark model
Faustov, R. N.; Galkin, V. O.
2017-09-01
Rare Λb→Λ l+l- and Λb→Λ γ decays are investigated in the relativistic quark model based on the quark-diquark picture of baryons. The decay form factors are calculated accounting for all relativistic effects, including relativistic transformations of baryon wave functions from rest to a moving reference frame and the contribution of the intermediate negative-energy states. The momentum-transfer-squared dependence of the form factors is explicitly determined in the whole accessible kinematical range. The calculated decay branching fractions, various forward-backward asymmetries for the rare decay Λb→Λ μ+μ-, are found to be consistent with recent detailed measurements by the LHCb Collaboration. Predictions for the Λb→Λ τ+τ- decay observables are given.
Model-driven development of covariances for spatiotemporal environmental health assessment.
Kolovos, Alexander; Angulo, José Miguel; Modis, Konstantinos; Papantonopoulos, George; Wang, Jin-Feng; Christakos, George
2013-01-01
Known conceptual and technical limitations of mainstream environmental health data analysis have directed research to new avenues. The goal is to deal more efficiently with the inherent uncertainty and composite space-time heterogeneity of key attributes, account for multi-sourced knowledge bases (health models, survey data, empirical relationships etc.), and generate more accurate predictions across space-time. Based on a versatile, knowledge synthesis methodological framework, we introduce new space-time covariance functions built by integrating epidemic propagation models and we apply them in the analysis of existing flu datasets. Within the knowledge synthesis framework, the Bayesian maximum entropy theory is our method of choice for the spatiotemporal prediction of the ratio of new infectives (RNI) for a case study of flu in France. The space-time analysis is based on observations during a period of 15 weeks in 1998-1999. We present general features of the proposed covariance functions, and use these functions to explore the composite space-time RNI dependency. We then implement the findings to generate sufficiently detailed and informative maps of the RNI patterns across space and time. The predicted distributions of RNI suggest substantive relationships in accordance with the typical physiographic and climatologic features of the country.
Mahdi Shariati, Mohammad; Su, Guosheng; Madsen, Per
2007-01-01
The reaction norm model is becoming a popular approach to study genotype x environment interaction (GxE), especially when there is a continuum of environmental effects. These effects are typically unknown, and an approximation that is used in the literature is to replace them by the phenotypic means of each environment. It has been shown that this method results in poor inferences and that a more satisfactory alternative is to infer environmental effects jointly with the other parameters of the model. Such a reaction norm model with unknown covariates and heterogeneous residual variances across herds was fitted to milk, protein, and fat yield of first-lactation Danish Holstein cows to investigate the presence of GxE. Data included 188,502 first test-day records from 299 herds and 3,775 herd-years in a time period ranging from 1991 to 2003. Variance components and breeding values were...
Holst, René; Jørgensen, Bent
2015-01-01
The paper proposes a versatile class of multiplicative generalized linear longitudinal mixed models (GLLMM) with additive dispersion components, based on explicit modelling of the covariance structure. The class incorporates a longitudinal structure into the random effects models and retains a marginal as well as a conditional interpretation. The estimation procedure is based on a computationally efficient quasi-score method for the regression parameters combined with a REML-like bias-corrected Pearson estimating function for the dispersion and correlation parameters. This avoids the multidimensional integral of the conventional GLMM likelihood and allows an extension of the robust empirical sandwich estimator for use with both association and regression parameters. The method is applied to a set of otolith data, used for age determination of fish.
Kisil, Vladimir V.
2010-01-01
The paper develops the theory of the covariant transform, which is inspired by the wavelet construction. It was observed that many interesting types of wavelets (or coherent states) arise from group representations which are not square integrable or from vacuum vectors which are not admissible. The covariant transform extends the applicability of the popular wavelet construction to classic examples like the Hardy space H_2, Banach spaces, covariant functional calculus and many others. Keywords: Wavelets, cohe...
Covariant field equations, gauge fields and conservation laws from Yang-Mills matrix models
Steinacker, Harold
2009-01-01
The effective geometry and the gravitational coupling of nonabelian gauge and scalar fields on generic NC branes in Yang-Mills matrix models is determined. Covariant field equations are derived from the basic matrix equations of motions, known as Yang-Mills algebra. Remarkably, the equations of motion for the Poisson structure and for the nonabelian gauge fields follow from a matrix Noether theorem, and are therefore protected from quantum corrections. This provides a transparent derivation and generalization of the effective action governing the SU(n) gauge fields obtained in [1], including the would-be topological term. In particular, the IKKT matrix model is capable of describing 4-dimensional NC space-times with a general effective metric. Metric deformations of flat Moyal-Weyl space are briefly discussed.
The covariance matrix of the Potts model: A random cluster analysis
Borgs, C.; Chayes, J.T.
1996-01-01
We consider the covariance matrix, G_mn = q^2 &lt;δ(σ_x,m); δ(σ_y,n)&gt;, of the d-dimensional q-state Potts model, rewriting it in the random cluster representation of Fortuin and Kasteleyn. In many of the q ordered phases, we identify the eigenvalues of this matrix both in terms of representations of the unbroken symmetry group of the model and in terms of random cluster connectivities and covariances, thereby attributing algebraic significance to these stochastic geometric quantities. We also show that the correlation length corresponding to the decay rate of one of the eigenvalues is the same as the inverse decay rate of the diameter of finite clusters. In dimension d=2, we show that this correlation length and the correlation length of the two-point function with free boundary conditions at the corresponding dual temperature are equal up to a factor of two. For systems with first-order transitions, this relation helps to resolve certain inconsistencies between recent exact and numerical work on correlation lengths at the self-dual point β_o. For systems with second-order transitions, this relation implies the equality of the correlation length exponents from above and below threshold, as well as an amplitude ratio of two. In the course of proving the above results, we establish several properties of independent interest, including left continuity of the inverse correlation length with free boundary conditions and upper semicontinuity of the decay rate for finite clusters in all dimensions, and left continuity of the two-dimensional free-boundary-condition percolation probability at β_o. We also introduce DLR equations for the random cluster model and use them to establish ergodicity of the free measure. In order to prove these results, we introduce a new class of events, which we call decoupling events, and two inequalities for these events
A joint logistic regression and covariate-adjusted continuous-time Markov chain model.
Rubin, Maria Laura; Chan, Wenyaw; Yamal, Jose-Miguel; Robertson, Claudia Sue
2017-12-10
The use of longitudinal measurements to predict a categorical outcome is an increasingly common goal in research studies. Joint models are commonly used to describe two or more models simultaneously by considering the correlated nature of their outcomes and the random error present in the longitudinal measurements. However, there is limited research on joint models with longitudinal predictors and categorical cross-sectional outcomes. Perhaps the most challenging task is how to model the longitudinal predictor process such that it represents the true biological mechanism that dictates the association with the categorical response. We propose a joint logistic regression and Markov chain model to describe a binary cross-sectional response, where the unobserved transition rates of a two-state continuous-time Markov chain are included as covariates. We use the method of maximum likelihood to estimate the parameters of our model. In a simulation study, coverage probabilities of about 95%, standard deviations close to standard errors, and low biases for the parameter values show that our estimation method is adequate. We apply the proposed joint model to a dataset of patients with traumatic brain injury to describe and predict a 6-month outcome based on physiological data collected post-injury and admission characteristics. Our analysis indicates that the information provided by physiological changes over time may help improve prediction of long-term functional status of these severely ill subjects. Copyright © 2017 John Wiley & Sons, Ltd.
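The two-state continuous-time Markov chain that serves as the longitudinal predictor above has closed-form transition probabilities, which a sketch of such a model might use as a building block. The rates and time point below are hypothetical, not values from the paper.

```python
import numpy as np

# Closed-form transition probability matrix P(t) for a two-state CTMC with
# rate lam (state 0 -> 1) and mu (state 1 -> 0); rows index the starting
# state. Rates are invented illustration values.
def ctmc2_transition(lam, mu, t):
    s = lam + mu
    e = np.exp(-s * t)
    return np.array([
        [mu / s + lam / s * e, lam / s * (1.0 - e)],
        [mu / s * (1.0 - e),   lam / s + mu / s * e],
    ])

P = ctmc2_transition(lam=0.3, mu=0.5, t=2.0)
# Each row of P sums to one; as t grows, both rows approach the stationary
# distribution (mu/s, lam/s).
```

In the joint model, the unobserved rates (lam, mu here) enter the logistic regression as covariates, so an expression like this would sit inside the likelihood being maximized.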
Covariant boost and structure functions of baryons in Gross-Neveu models
Brendel, Wieland; Thies, Michael
2010-01-01
Baryons in the large N limit of two-dimensional Gross-Neveu models are reconsidered. The time-dependent Dirac-Hartree-Fock approach is used to boost a baryon to any inertial frame and shown to yield the covariant energy-momentum relation. Momentum distributions are computed exactly in arbitrary frames and used to interpolate between the rest frame and the infinite momentum frame, where they are related to structure functions. Effects from the Dirac sea depend sensitively on the occupation fraction of the valence level and the bare fermion mass and do not vanish at infinite momentum. In the case of the kink baryon, they even lead to divergent quark and antiquark structure functions at x=0.
Sommer, Stefan Horst; Svane, Anne Marie
2017-01-01
We discuss the geometric foundation behind the use of stochastic processes in the frame bundle of a smooth manifold to build stochastic models with applications in statistical analysis of non-linear data. The transition densities for the projection to the manifold of Brownian motions developed in the frame bundle lead to a family of probability distributions on the manifold. We explain how data mean and covariance can be interpreted as points in the frame bundle or, more precisely, in the bundle of symmetric positive definite 2-tensors, analogously to the parameters describing Euclidean normal distributions. We discuss a factorization of the frame bundle projection map through this bundle, the natural sub-Riemannian structure of the frame bundle, the effect of holonomy, and the existence of subbundles where the Hormander condition is satisfied such that the Brownian motions have smooth transition...
Quarkonia and heavy-light mesons in a covariant quark model
Leitão Sofia
2016-01-01
Preliminary calculations using the Covariant Spectator Theory (CST) employed a scalar linear confining interaction and an additional constant vector potential to compute the mesonic mass spectra. In this work we generalize the confining interaction to include more general structures, in particular a vector and also a pseudoscalar part, as suggested by a recent study [1]. A one-gluon-exchange kernel is also implemented to describe the short-range part of the interaction. We solve the simplest CST approximation to the complete Bethe-Salpeter equation, the one-channel spectator equation, using a numerical technique that eliminates all singularities from the kernel. The parameters of the model are determined through a fit to the experimental pseudoscalar meson spectra, with good agreement for both quarkonia and heavy-light states.
Foroughi Pour, Ali; Dalton, Lori A
2018-03-21
Many bioinformatics studies aim to identify markers, or features, that can be used to discriminate between distinct groups. In problems where strong individual markers are not available, or where interactions between gene products are of primary interest, it may be necessary to consider combinations of features as a marker family. To this end, recent work proposes a hierarchical Bayesian framework for feature selection that places a prior on the set of features we wish to select and on the label-conditioned feature distribution. While an analytical posterior under Gaussian models with block covariance structures is available, the optimal feature selection algorithm for this model remains intractable since it requires evaluating the posterior over the space of all possible covariance block structures and feature-block assignments. To address this computational barrier, in prior work we proposed a simple suboptimal algorithm, 2MNC-Robust, with robust performance across the space of block structures. Here, we present three new heuristic feature selection algorithms. The proposed algorithms outperform 2MNC-Robust and many other popular feature selection algorithms on synthetic data. In addition, enrichment analysis on real breast cancer, colon cancer, and Leukemia data indicates they also output many of the genes and pathways linked to the cancers under study. Bayesian feature selection is a promising framework for small-sample high-dimensional data, in particular biomarker discovery applications. When applied to cancer data, these algorithms output many genes already shown to be involved in cancer as well as potentially new biomarkers. Furthermore, one of the proposed algorithms, SPM, outputs blocks of heavily correlated genes, particularly useful for studying gene interactions and gene networks.
Do gamblers eat more salt? Testing a latent trait model of covariance in consumption.
Goodwin, Belinda C; Browne, Matthew; Rockloff, Matthew; Donaldson, Phillip
2015-09-01
A diverse class of stimuli, including certain foods, substances, media, and economic behaviours, may be described as 'reward-oriented' in that they provide immediate reinforcement with little initial investment. Neurophysiological and personality concepts, including dopaminergic dysfunction, reward sensitivity and rash impulsivity, each predict the existence of a latent behavioural trait that leads to increased consumption of all stimuli in this class. Whilst bivariate relationships (co-morbidities) are often reported in the literature, to our knowledge, a multivariate investigation of this possible trait has not been done. We surveyed 1,194 participants (550 male) on their typical weekly consumption of 11 types of reward-oriented stimuli, including fast food, salt, caffeine, television, gambling products, and illicit drugs. Confirmatory factor analysis was used to compare models in a 3×3 structure, based on the definition of a single latent factor (none, fixed loadings, or estimated loadings), and assumed residual covariance structure (none, a-priori / literature based, or post-hoc / data-driven). The inclusion of a single latent behavioural 'consumption' factor significantly improved model fit in all cases. Also confirming theoretical predictions, estimated factor loadings on reward-oriented indicators were uniformly positive, regardless of assumptions regarding residual covariances. Additionally, the latent trait was found to be negatively correlated with the non-reward-oriented indicators of fruit and vegetable consumption. The findings support the notion of a single behavioural trait leading to increased consumption of reward-oriented stimuli across multiple modalities. We discuss implications regarding the concentration of negative lifestyle-related health behaviours.
Chouika, N.; Mezrag, C.; Moutarde, H.; Rodríguez-Quintero, J.
2018-05-01
A systematic approach to the model building of Generalized Parton Distributions (GPDs), based on their overlap representation within the DGLAP kinematic region and a further covariant extension to the ERBL one, is applied to the pion's valence-quark case, using light-front wave functions inspired by the Nakanishi representation of the pion Bethe-Salpeter amplitudes (BSA). This simple but fruitful pion GPD model illustrates the general model building technique and, in addition, allows the ambiguities related to the covariant extension, grounded in the Double Distribution (DD) representation, to be constrained by requiring a soft-pion theorem to be properly observed.
An integrative model of evolutionary covariance: a symposium on body shape in fishes.
Walker, Jeffrey A
2010-12-01
A major direction of current and future biological research is to understand how multiple, interacting functional systems coordinate in producing a body that works. This understanding is complicated by the fact that organisms need to work well in multiple environments, with both predictable and unpredictable environmental perturbations. Furthermore, organismal design reflects a history of past environments and not a plan for future environments. How complex, interacting functional systems evolve, then, is a truly grand challenge. In accepting the challenge, an integrative model of evolutionary covariance is developed. The model combines quantitative genetics, functional morphology/physiology, and functional ecology. The model is used to convene scientists ranging from geneticists, to physiologists, to ecologists, to engineers to facilitate the emergence of body shape in fishes as a model system for understanding how complex, interacting functional systems develop and evolve. Body shape of fish is a complex morphology that (1) results from many developmental paths and (2) functions in many different behaviors. Understanding the coordination and evolution of the many paths from genes to body shape, body shape to function, and function to a working fish body in a dynamic environment is now possible given new technologies from genetics to engineering and new theoretical models that integrate the different levels of biological organization (from genes to ecology).
Using Fit Indexes to Select a Covariance Model for Longitudinal Data
Liu, Siwei; Rovine, Michael J.; Molenaar, Peter C. M.
2012-01-01
This study investigated the performance of fit indexes in selecting a covariance structure for longitudinal data. Data were simulated to follow a compound symmetry, first-order autoregressive, first-order moving average, or random-coefficients covariance structure. We examined the ability of the likelihood ratio test (LRT), root mean square error…
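The kind of comparison studied here can be sketched numerically. The following is a toy stand-in, not the authors' simulation design: the two candidate structures and the use of AIC are illustrative choices. We simulate longitudinal data under a first-order autoregressive covariance and let an information criterion choose between compound symmetry and AR(1).

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate longitudinal data (n subjects, T occasions) under a first-order
# autoregressive (AR(1)) covariance, then score candidate structures by AIC.
n, T, sigma2, rho = 200, 4, 1.0, 0.6
lags = np.abs(np.subtract.outer(np.arange(T), np.arange(T)))
true_cov = sigma2 * rho ** lags
X = rng.multivariate_normal(np.zeros(T), true_cov, size=n)

def loglik(cov):
    """Log-likelihood of X under a zero-mean MVN with fixed covariance."""
    _, logdet = np.linalg.slogdet(cov)
    quad = np.einsum('ij,jk,ik->i', X, np.linalg.inv(cov), X)
    return -0.5 * np.sum(T * np.log(2 * np.pi) + logdet + quad)

s2 = X.var()                                                  # pooled variance
r1 = np.corrcoef(X[:, :-1].ravel(), X[:, 1:].ravel())[0, 1]   # lag-1 correlation

candidates = {
    # compound symmetry: the same correlation at every lag (2 parameters)
    "CS":    s2 * ((1 - r1) * np.eye(T) + r1),
    # AR(1): correlation decays geometrically with lag (2 parameters)
    "AR(1)": s2 * r1 ** lags,
}
aic = {name: 2 * 2 - 2 * loglik(cov) for name, cov in candidates.items()}
best = min(aic, key=aic.get)
print(best)   # AR(1) should win, since the data were generated from it
```

With 200 subjects the geometric decay of the lag-2 correlation (0.36 vs. the 0.6 that compound symmetry forces) is easily detected, which is the kind of discrimination the fit indexes in the study are asked to make.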
A comparative study of covariance selection models for the inference of gene regulatory networks.
Stifanelli, Patrizia F; Creanza, Teresa M; Anglani, Roberto; Liuzzi, Vania C; Mukherjee, Sayan; Schena, Francesco P; Ancona, Nicola
2013-10-01
The inference, or 'reverse-engineering', of gene regulatory networks from expression data and the description of the complex dependency structures among genes are open issues in modern molecular biology. In this paper we compared three regularized methods of covariance selection for the inference of gene regulatory networks, developed to circumvent the problems arising when the number of observations n is smaller than the number of genes p. The examined approaches provided three alternative estimates of the inverse covariance matrix: (a) the 'PINV' method is based on the Moore-Penrose pseudoinverse, (b) the 'RCM' method performs correlation between regression residuals and (c) the 'ℓ(2C)' method maximizes a properly regularized log-likelihood function. Our extensive simulation studies showed that ℓ(2C) outperformed the other two methods, having the most predictive partial correlation estimates and the highest sensitivity for inferring conditional dependencies between genes, even when only a small number of observations was available. The application of this method to inferring gene networks of the isoprenoid biosynthesis pathways in Arabidopsis thaliana revealed a negative partial correlation coefficient between the two hubs in the two isoprenoid pathways and, more importantly, provided evidence of cross-talk between genes in the plastidial and the cytosolic pathways. When applied to gene expression data relative to a signature of the HRAS oncogene in human cell cultures, the method revealed 9 genes (p-value<0.0005) directly interacting with HRAS, sharing the same Ras-responsive binding site for the transcription factor RREB1. This result suggests that the transcriptional activation of these genes is mediated by a common transcription factor downstream of Ras signaling. Software implementing the methods in the form of Matlab scripts is available at: http://users.ba.cnr.it/issia/iesina18/CovSelModelsCodes.zip. Copyright © 2013 The Authors. Published by
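The 'PINV' idea is compact enough to sketch directly (a minimal illustration on assumed toy data, not the paper's implementation): when n < p the sample covariance is singular, so its Moore-Penrose pseudoinverse stands in for the inverse, and partial correlations are read off the resulting matrix.

```python
import numpy as np

rng = np.random.default_rng(1)

# n < p setting: fewer observations than genes, so the sample covariance
# is singular and cannot be inverted directly.
n, p = 30, 50
Z = rng.standard_normal((n, p))
Z[:, 1] += 2.0 * Z[:, 0]            # plant a strong "regulatory" edge 0-1

S = np.cov(Z, rowvar=False)
Theta = np.linalg.pinv(S)           # Moore-Penrose pseudoinverse of S

# Partial correlation between genes i and j, given all other genes:
#   rho_ij = -Theta_ij / sqrt(Theta_ii * Theta_jj)
d = np.sqrt(np.diag(Theta))
pcor = -Theta / np.outer(d, d)
np.fill_diagonal(pcor, 1.0)

print(pcor.shape)                   # (50, 50)
```

Because the pseudoinverse of a positive semidefinite matrix is itself positive semidefinite, the resulting partial correlations are bounded by 1 in absolute value, just as with a proper inverse.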
Quark (diquark) fragmentation in soft π⁻p interactions at P=40 GeV/c
Didenko, L.A.; Grishin, V.G.; Kuznetsov, V.A.
1984-01-01
The quark and diquark fragmentation into π± and K⁰ mesons and Λ hyperons in soft π⁻p interactions at 40 GeV/c is studied. The fragmentation functions D^{π±}(x_F) and the invariant functions F^{π±}(x_F) are compared with analogous data on ν(ν̄)p interactions. It is shown that good agreement exists in the region x_F ≳ 0.15 for these different processes. The x_E-dependence of the quark and diquark fragmentation functions for neutral kaons is similar to that in e⁺e⁻ annihilation. The pickup probability of a strange s(s̄) quark (λ_s) and diquark (λ_qq), relative to u(ū) and d(d̄) quarks from the sea, has been found to be λ_s = 0.17 and λ_qq = 0.14 ± 0.03.
Adlaf, E M; Kohn, P M
1989-07-01
Re-analysis employing covariance-structural models was conducted on Strickland's (1983) survey data on 772 drinking students from Grades 7, 9 and 11. These data bear on the relations among alcohol consumption, alcohol abuse, association with drinking peers and exposure to televised alcohol advertising. Whereas Strickland used a just-identified model which, therefore, could not be tested for goodness of fit, our re-analysis tested several alternative models, which could be contradicted by the data. One model did fit his data particularly well. Its major implications are as follows: (1) Symptomatic consumption, negative consequences and self-rated severity of alcohol-related problems apparently reflect a common underlying factor, namely alcohol abuse. (2) Use of alcohol to relieve distress and frequency of intoxication, however, appear not to reflect abuse, although frequent intoxication contributes substantially to it. (3) Alcohol advertising affects consumption directly and abuse indirectly, although peer association has far greater impact on both consumption and abuse. These findings are interpreted as lending little support to further restrictions on advertising.
Bianchi, Eugenio; De Lorenzo, Tommaso; Smerlak, Matteo
2015-06-01
We study the dynamics of vacuum entanglement in the process of gravitational collapse and subsequent black hole evaporation. In the first part of the paper, we introduce a covariant regularization of entanglement entropy tailored to curved spacetimes; this regularization allows us to propose precise definitions for the concepts of black hole "exterior entropy" and "radiation entropy." For a Vaidya model of collapse we find results consistent with the standard thermodynamic properties of Hawking radiation. In the second part of the paper, we compute the vacuum entanglement entropy of various spherically-symmetric spacetimes of interest, including the nonsingular black hole model of Bardeen, Hayward, Frolov and Rovelli-Vidotto and the "black hole fireworks" model of Haggard-Rovelli. We discuss specifically the role of event and trapping horizons in connection with the behavior of the radiation entropy at future null infinity. We observe in particular that ( i) in the presence of an event horizon the radiation entropy diverges at the end of the evaporation process, ( ii) in models of nonsingular evaporation (with a trapped region but no event horizon) the generalized second law holds only at early times and is violated in the "purifying" phase, ( iii) at late times the radiation entropy can become negative (i.e. the radiation can be less correlated than the vacuum) before going back to zero leading to an up-down-up behavior for the Page curve of a unitarily evaporating black hole.
Ranjeet John; Jiquan Chen; Asko Noormets; Xiangming Xiao; Jianye Xu; Nan Lu; Shiping Chen
2013-01-01
We evaluate the modelling of carbon fluxes from eddy covariance (EC) tower observations in different water-limited land-cover/land-use (LCLU) and biome types in semi-arid Inner Mongolia, China. The vegetation photosynthesis model (VPM) and modified VPM (MVPM), driven by the enhanced vegetation index (EVI) and land-surface water index (LSWI), which were derived from the...
Bun, M.; de Haan, M.
2010-01-01
We analyze the usefulness of the first-stage F-statistic for detecting weak instruments in the IV model with a nonscalar error covariance structure. In particular, we question the validity of the rule of thumb of a first-stage F-statistic of 10 or higher for models with correlated errors.
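As a concrete reference point, the classical homoskedastic first-stage F-statistic (the quantity the rule of thumb is about) can be computed as follows. The data-generating numbers are illustrative, and this sketch uses i.i.d. errors rather than the nonscalar error covariance the paper studies.

```python
import numpy as np

rng = np.random.default_rng(2)

# First-stage F-statistic of an IV regression under i.i.d. errors.
# (The paper's point: with a nonscalar error covariance, this classical
# homoskedastic F can be a misleading weak-instrument diagnostic.)
n = 500
z = rng.standard_normal((n, 2))                          # two instruments
x = z @ np.array([0.3, 0.2]) + rng.standard_normal(n)    # first-stage equation

Z = np.column_stack([np.ones(n), z])     # instruments plus intercept
beta, *_ = np.linalg.lstsq(Z, x, rcond=None)
resid = x - Z @ beta
rss1 = resid @ resid                     # unrestricted residual sum of squares
rss0 = ((x - x.mean()) ** 2).sum()       # restricted model: intercept only
q, dof = 2, n - Z.shape[1]
F = ((rss0 - rss1) / q) / (rss1 / dof)
print(F > 10)                            # the "F >= 10" rule of thumb
```

With the fairly strong instruments assumed here the statistic clears the threshold comfortably; the question raised by the paper is whether that comparison remains meaningful once the errors are correlated.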
Gebreyesus, Grum; Lund, Mogens Sandø; Buitenhuis, Albert Johannes
2017-01-01
Accurate genomic prediction requires a large reference population, which is problematic for traits that are expensive to measure. Traits related to milk protein composition are not routinely recorded due to costly procedures and are considered to be controlled by a few quantitative trait loci...... of large effect. The amount of variation explained may vary between regions leading to heterogeneous (co)variance patterns across the genome. Genomic prediction models that can efficiently take such heterogeneity of (co)variances into account can result in improved prediction reliability. In this study, we...... developed and implemented novel univariate and bivariate Bayesian prediction models, based on estimates of heterogeneous (co)variances for genome segments (BayesAS). Available data consisted of milk protein composition traits measured on cows and de-regressed proofs of total protein yield derived for bulls...
4He(γ,dd) and 3He(γ,pd) reactions in a nonlocal covariant model
Kasatkin Yu. A.
2014-03-01
Photonuclear reaction research is of great interest for obtaining information about the structure of nuclei. The investigation of structural effects requires certain insights into the reaction mechanisms, which have to be identified on the basis of the fundamental principles of covariance and gauge invariance. The major achievement of the chosen model is its ability to reproduce the cross-section dependence using the minimal necessary set of parameters. We analyze the two-particle disintegration of 3He nuclei by photons. Our interest was raised by the fact that 3He is the simplest many-particle system which admits an exact solution. We also consider the process 4He(γ,dd). This process proceeds through quadrupole absorption of γ-rays, while the dipole transition is suppressed. This property is a consequence of isospin selection rules as well as the identity of the particles in the final state. The obtained results describe the energy range from threshold (20 MeV) to 140 MeV. Therefore, the model presented in the paper has the peculiarity of being valid not only in the low-energy regime, but also at higher energies. The present paper is devoted to determining the roles of different reaction mechanisms and to solving the problems above.
Modeling light use efficiency in a subtropical mangrove forest equipped with CO2 eddy covariance
J. G. Barr
2013-03-01
Despite the importance of mangrove ecosystems in the global carbon budget, the relationships between environmental drivers and carbon dynamics in these forests remain poorly understood. This limited understanding is partly a result of the challenges associated with in situ flux studies. Tower-based CO2 eddy covariance (EC) systems are installed in only a few mangrove forests worldwide, and the longest EC record, from the Florida Everglades, contains less than 9 years of observations. A primary goal of the present study was to develop a methodology to estimate canopy-scale photosynthetic light use efficiency in this forest. These tower-based observations represent a basis for associating CO2 fluxes with canopy light use properties, and thus provide the means for utilizing satellite-based reflectance data for larger-scale investigations. We present a model for mangrove canopy light use efficiency, utilizing the enhanced vegetation index (EVI) derived from the Moderate Resolution Imaging Spectroradiometer (MODIS), that is capable of predicting changes in mangrove forest CO2 fluxes caused by a hurricane disturbance and changes in regional environmental conditions, including temperature and salinity. Model parameters are solved for in a Bayesian framework. The model structure requires estimates of ecosystem respiration (RE), and we present the first ever tower-based estimates of mangrove forest RE derived from nighttime CO2 fluxes. Our investigation is also the first to show the effects of salinity on mangrove forest CO2 uptake, which declines 5% per each 10 parts per thousand (ppt) increase in salinity. Light use efficiency in this forest declines with increasing daily photosynthetic active radiation, which is an important departure from the assumption of constant light use efficiency typically applied in satellite-driven models. The model developed here provides a framework for estimating CO2 uptake by these forests from reflectance data and
James M. Cheverud
2007-03-01
Comparisons of covariance patterns are becoming more common as interest in the evolution of relationships between traits and in the evolutionary phenotypic diversification of clades has grown. We present parallel analyses of covariance matrix similarity for cranial traits in 14 New World monkey genera using the Random Skewers (RS), T-statistics, and Common Principal Components (CPC) approaches. We find that the CPC approach is very powerful in that, with adequate sample sizes, it can be used to detect significant differences in matrix structure, even between matrices that are virtually identical in their evolutionary properties, as indicated by the RS results. We suggest that in many instances the assumption that population covariance matrices are identical be rejected out of hand. The more interesting and relevant question is: how similar are two covariance matrices with respect to their predicted evolutionary responses? This issue is addressed by the random skewers method described here.
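The random skewers method itself is simple enough to sketch (a minimal illustration, not the authors' cranial-trait analysis): draw random selection gradients, apply each to both covariance matrices via the multivariate breeder's equation, and average the vector correlations of the predicted responses.

```python
import numpy as np

rng = np.random.default_rng(3)

def random_skewers(G1, G2, n_vec=1000):
    """Mean vector correlation of the responses of two covariance matrices
    to the same random selection gradients (the random skewers statistic)."""
    p = G1.shape[0]
    betas = rng.standard_normal((n_vec, p))
    betas /= np.linalg.norm(betas, axis=1, keepdims=True)  # unit-length skewers
    r = []
    for b in betas:
        z1, z2 = G1 @ b, G2 @ b        # predicted responses: delta-z = G beta
        r.append(z1 @ z2 / (np.linalg.norm(z1) * np.linalg.norm(z2)))
    return float(np.mean(r))

A = np.array([[1.0, 0.5], [0.5, 1.0]])
print(random_skewers(A, A))            # ~1: identical matrices respond in parallel
print(random_skewers(A, 2 * A))        # ~1 as well: RS ignores proportional scaling
```

Note that proportional matrices score a perfect similarity, which is exactly why RS measures similarity of *evolutionary response directions* rather than equality of the matrices, the distinction the abstract draws against the CPC test.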
Kinnebrock, Silja; Podolskij, Mark
This paper introduces a new estimator to measure the ex-post covariation between high-frequency financial time series under market microstructure noise. We provide an asymptotic limit theory (including feasible central limit theorems) for standard methods such as regression, correlation analysis...... process can be relaxed and how our method can be applied to non-synchronous observations. We also present an empirical study of how high-frequency correlations, regressions and covariances change through time....
A 3-D Riesz-Covariance Texture Model for Prediction of Nodule Recurrence in Lung CT
Cirujeda Pol; Dicente Cid Yashin; Müller Henning; Rubin Daniel L.; Aguilera Todd A.; Jr. Billy W. Loo; Diehn Maximilian; Binefa Xavier; Depeursinge Adrien
2016-01-01
This paper proposes a novel imaging biomarker of lung cancer relapse based on 3-D texture analysis of CT images. Three-dimensional morphological nodular tissue properties are described in terms of 3-D Riesz wavelets. The responses of the latter are aggregated within nodular regions by means of feature covariances, which leverage rich intra- and inter-variations of the feature space dimensions. When compared to the classical use of the average for feature aggregation, feature covariances preserve sp...
Non-stationary covariance function modelling in 2D least-squares collocation
Darbeheshti, N.; Featherstone, W. E.
2009-06-01
Standard least-squares collocation (LSC) assumes 2D stationarity and 3D isotropy, and relies on a covariance function to account for spatial dependence in the observed data. However, the assumption that the spatial dependence is constant throughout the region of interest may sometimes be violated. Assuming a stationary covariance structure can result in over-smoothing of, e.g., the gravity field in mountains and under-smoothing in great plains. We introduce the kernel convolution method from spatial statistics for non-stationary covariance structures, and demonstrate its advantage for dealing with non-stationarity in geodetic data. We then compared stationary and non-stationary covariance functions in 2D LSC to the empirical example of gravity anomaly interpolation near the Darling Fault, Western Australia, where the field is anisotropic and non-stationary. The results with non-stationary covariance functions are better than standard LSC in terms of formal errors and cross-validation against data not used in the interpolation, demonstrating that the use of non-stationary covariance functions can improve upon standard (stationary) LSC.
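The stationary baseline that the paper improves upon can be sketched as follows (toy 1-D data and a Gaussian covariance function are assumed purely for illustration): LSC predicts the signal at new points as ŝ = C_sx (C_xx + D)⁻¹ x, where C is the signal covariance and D the noise covariance.

```python
import numpy as np

rng = np.random.default_rng(4)

# Minimal stationary least-squares collocation (LSC): predict a signal at
# new points from noisy observations using an isotropic covariance function.
def cov(x1, x2, c0=1.0, alpha=50.0):
    """Stationary, isotropic covariance: depends only on the distance."""
    d = np.abs(x1[:, None] - x2[None, :])
    return c0 * np.exp(-alpha * d**2)

x_obs = np.linspace(0.0, 1.0, 15)
signal = np.sin(2 * np.pi * x_obs)
noise_var = 0.01
y = signal + rng.normal(0.0, np.sqrt(noise_var), x_obs.size)

x_new = np.array([0.25, 0.75])
C_sx = cov(x_new, x_obs)                          # signal/observation covariance
C_xx = cov(x_obs, x_obs) + noise_var * np.eye(x_obs.size)
s_hat = C_sx @ np.linalg.solve(C_xx, y)           # LSC prediction
print(s_hat)    # should be near sin(2*pi*0.25) = 1 and sin(2*pi*0.75) = -1
```

In the non-stationary kernel-convolution extension discussed in the paper, the single `cov` function above would be replaced by one whose shape (e.g. `alpha`) varies with location, which is what lets the field be smoothed differently over mountains and plains.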
Eekhout, I.; Wiel, M.A. van de; Heymans, M.W.
2017-01-01
Background. Multiple imputation is a recommended method to handle missing data. For significance testing after multiple imputation, Rubin’s Rules (RR) are easily applied to pool parameter estimates. In a logistic regression model, to consider whether a categorical covariate with more than two levels
A Poisson Log-Normal Model for Constructing Gene Covariation Network Using RNA-seq Data.
Choi, Yoonha; Coram, Marc; Peng, Jie; Tang, Hua
2017-07-01
Constructing expression networks using transcriptomic data is an effective approach for studying gene regulation. A popular approach for constructing such a network is based on the Gaussian graphical model (GGM), in which an edge between a pair of genes indicates that the expression levels of these two genes are conditionally dependent, given the expression levels of all other genes. However, GGMs are not appropriate for non-Gaussian data, such as those generated in RNA-seq experiments. We propose a novel statistical framework that maximizes a penalized likelihood, in which the observed count data follow a Poisson log-normal distribution. To overcome the computational challenges, we use Laplace's method to approximate the likelihood and its gradients, and apply the alternating directions method of multipliers to find the penalized maximum likelihood estimates. The proposed method is evaluated and compared with GGMs using both simulated and real RNA-seq data. The proposed method shows improved performance in detecting edges that represent covarying pairs of genes, particularly for edges connecting low-abundant genes and edges around regulatory hubs.
Vogel, Curtis R; Tyler, Glenn A; Wittich, Donald J
2014-07-01
We introduce a framework for modeling, analysis, and simulation of aero-optics wavefront aberrations that is based on spatial-temporal covariance matrices extracted from wavefront sensor measurements. Within this framework, we present a quasi-homogeneous structure function to analyze nonhomogeneous, mildly anisotropic spatial random processes, and we use this structure function to show that phase aberrations arising in aero-optics are, for an important range of operating parameters, locally Kolmogorov. This strongly suggests that the d^(5/3) power law for adaptive optics (AO) deformable mirror fitting error, where d denotes actuator separation, holds for certain important aero-optics scenarios. This framework also allows us to compute bounds on AO servo lag error and predictive control error. In addition, it provides us with the means to accurately simulate AO systems for the mitigation of aero-effects, and it may provide insight into underlying physical processes associated with turbulent flow. The techniques introduced here are demonstrated using data obtained from the Airborne Aero-Optics Laboratory.
Zhang, Y.; Novick, K. A.; Song, C.; Zhang, Q.; Hwang, T.
2017-12-01
Drought and heat waves are expected to increase both in frequency and amplitude, constituting a major disturbance to global carbon and water cycles under future climate change. However, how these climate anomalies translate into physiological drought, or ecosystem moisture stress, is still not clear, especially under the co-limitations of soil moisture supply and atmospheric demand for water. In this study, we characterized the ecosystem-level moisture stress in a deciduous forest in the southeastern United States using the Coupled Carbon and Water (CCW) model and in-situ eddy covariance measurements. Physiologically, vapor pressure deficit (VPD), as an atmospheric water demand indicator, largely controls the openness of leaf stomata and regulates atmospheric carbon and water exchanges during periods of hydrological stress. Here, we tested three forms of VPD-related moisture scalars, i.e., exponential (K2), hyperbolic (K3), and logarithmic (K4), to quantify the sensitivity of light-use efficiency to VPD along different soil moisture conditions. The sensitivity indicators of K values were calibrated within the framework of CCW using Monte Carlo simulations on the hourly scale, on which VPD and soil water content (SWC) are largely decoupled and the full carbon and water exchange information is retained. We found that the three K values show similar performance in the predictions of ecosystem-level photosynthesis and transpiration after calibration. However, all K values show consistent gradient changes along SWC, indicating that this deciduous forest is less responsive to VPD as soil moisture decreases, a phenomenon of isohydricity in which plants tend to close stomata to keep the leaf water potential constant and reduce the risk of hydraulic failure. Our study suggests that accounting for such isohydric information, or the spectrum of moisture stress along different soil moisture conditions, in models can significantly improve our ability to predict ecosystem responses to future
Binder, Harald; Sauerbrei, Willi; Royston, Patrick
2013-06-15
In observational studies, many continuous or categorical covariates may be related to an outcome. Various spline-based procedures or the multivariable fractional polynomial (MFP) procedure can be used to identify important variables and functional forms for continuous covariates. This is the main aim of an explanatory model, as opposed to a model only for prediction. The type of analysis often guides the complexity of the final model. Spline-based procedures and MFP have tuning parameters for choosing the required complexity. To compare model selection approaches, we perform a simulation study in the linear regression context based on a data structure intended to reflect realistic biomedical data. We vary the sample size, variance explained, and complexity parameters for model selection. We consider 15 variables. A sample size of 200 (1000) and R² = 0.2 (0.8) is the scenario with the smallest (largest) amount of information. For assessing performance, we consider prediction error, correct and incorrect inclusion of covariates, qualitative measures for judging selected functional forms and further novel criteria. From limited information, a suitable explanatory model cannot be obtained. Prediction performance from all types of models is similar. With a medium amount of information, MFP performs better than splines on several criteria. MFP better recovers simpler functions, whereas splines better recover more complex functions. For a large amount of information and no local structure, MFP and the spline procedures often select similar explanatory models. Copyright © 2012 John Wiley & Sons, Ltd.
Bachoc, F.
2013-01-01
The parametric estimation of the covariance function of a Gaussian process is studied in the framework of the Kriging model. Maximum Likelihood and Cross Validation estimators are considered. The correctly specified case, in which the covariance function of the Gaussian process does belong to the parametric set used for estimation, is first studied in an increasing-domain asymptotic framework. The sampling considered is a randomly perturbed multidimensional regular grid. Consistency and asymptotic normality are proved for the two estimators. It is then shown that strong perturbations of the regular grid are always beneficial to Maximum Likelihood estimation. The incorrectly specified case, in which the covariance function of the Gaussian process does not belong to the parametric set used for estimation, is then studied. It is shown that Cross Validation is more robust than Maximum Likelihood in this case. Finally, two applications of the Kriging model with Gaussian processes are carried out on industrial data. For a validation problem of the friction model of the thermal-hydraulic code FLICA 4, where experimental results are available, it is shown that Gaussian process modeling of the FLICA 4 code model error considerably improves its predictions. Then, for a metamodeling problem of the GERMINAL thermal-mechanical code, the interest of the Kriging model with Gaussian processes, compared to neural network methods, is shown.
Threat Object Detection using Covariance Matrix Modeling in X-ray Images
Jeon, Byoun Gil; Kim, Jong Yul; Moon, Myung Kook
2016-01-01
The X-ray imaging system for aviation security is one such application. In airports, all passengers and belongings should be inspected and cleared by security machines before boarding an aircraft, to eliminate all threat factors. Those threat factors might be directly connected to terrorist threats, awfully hazardous not only to passengers but also to people in highly populated areas such as major cities or buildings. Because the performance of such systems increases along with the growth of IT technology, information of various types and good quality can be provided for security checks. However, the inspections are mainly affected by human factors. This means that human inspectors should become more proficient in step with the growth of the technology for efficient and effective inspection, but there is a clear limit to proficiency: a human being is not a computer. Because of this limitation, aviation security techniques tend to provide not only abundant, high-quality information but also effective assistance for security inspectors. Many image processing applications have already been developed to provide efficient assistance for security systems. Naturally, the security check procedure should not be replaced by automatic software, because it is not guaranteed that an automatic system will never make a mistake. This paper addresses an application of threat object detection using covariance matrix modeling. The algorithm is implemented in the MATLAB environment and its performance is evaluated by comparison with other detection algorithms. Considering that the shape of an object in an image changes with the attitude of the object relative to the imaging machine, the implemented detector is robust to rotation and scaling of an object.
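The abstract does not spell out its covariance model, but the general region-covariance recipe it resembles can be sketched: describe an image region by the covariance matrix of per-pixel features and compare descriptors with an affine-invariant metric on positive-definite matrices, which is what makes such detectors tolerant of rotation and scale changes. The feature choices below are illustrative assumptions, not the paper's.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(5)

def region_covariance(img):
    """Covariance of per-pixel features (x, y, intensity, |dx|, |dy|)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dy, dx = np.gradient(img.astype(float))
    F = np.stack([xs.ravel(), ys.ravel(), img.ravel(),
                  np.abs(dx).ravel(), np.abs(dy).ravel()])
    return np.cov(F)                       # a 5x5 descriptor of the region

def riemann_dist(C1, C2):
    """sqrt(sum of log^2 of the generalized eigenvalues of (C1, C2)):
    invariant under affine feature transforms, unlike the Frobenius norm."""
    lam = eigh(C1, C2, eigvals_only=True)
    return float(np.sqrt(np.sum(np.log(lam) ** 2)))

a = rng.random((16, 16))                   # stand-ins for X-ray image patches
b = rng.random((16, 16))
print(riemann_dist(region_covariance(a), region_covariance(a)))  # ~0: self-match
print(riemann_dist(region_covariance(a), region_covariance(b)))  # > 0
```

A sliding-window detector would compute this distance between a template descriptor and every candidate region, flagging regions below a threshold.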
Delta and Omega electromagnetic form factors in a Dyson-Schwinger/Bethe-Salpeter approach
Diana Nicmorus, Gernot Eichmann, Reinhard Alkofer
2010-12-01
We investigate the electromagnetic form factors of the Delta and the Omega baryons within the Poincaré-covariant framework of Dyson-Schwinger and Bethe-Salpeter equations. The three-quark core contributions of the form factors are evaluated by employing a quark-diquark approximation. We use a consistent setup for the quark-gluon dressing, the quark-quark bound-state kernel and the quark-photon interaction. Our predictions for the multipole form factors are compatible with available experimental data and quark-model estimates. The current-quark mass evolution of the static electromagnetic properties agrees with results provided by lattice calculations.
Hoyle, R H
1991-02-01
Indirect measures of psychological constructs are vital to clinical research. On occasion, however, the meaning of indirect measures of psychological constructs is obfuscated by statistical procedures that do not account for the complex relations between items and latent variables and among latent variables. Covariance structure analysis (CSA) is a statistical procedure for testing hypotheses about the relations among items that indirectly measure a psychological construct and relations among psychological constructs. This article introduces clinical researchers to the strengths and limitations of CSA as a statistical procedure for conceiving and testing structural hypotheses that are not tested adequately with other statistical procedures. The article is organized around two empirical examples that illustrate the use of CSA for evaluating measurement models with correlated error terms, higher-order factors, and measured and latent variables.
Longitudinal momentum distributions in transverse coordinate space
Kumar, Narinder; Mondal, Chandan
2016-01-01
In the present work, we study the longitudinal momentum distributions in transverse coordinate space in a light-front quark-diquark model inspired by soft-wall AdS/QCD. We adopt the phenomenological light-front quark-diquark model proposed by Gutsche et al. In this model, the light-front wave functions (LFWFs) for the proton are constructed from the two-particle wave functions obtained in soft-wall AdS/QCD.
Three Cs in Measurement Models: Causal Indicators, Composite Indicators, and Covariates
Bollen, Kenneth A.; Bauldry, Shawn
2011-01-01
In the last two decades attention to causal (and formative) indicators has grown. Accompanying this growth has been the belief that we can classify indicators into two categories, effect (reflective) indicators and causal (formative) indicators. This paper argues that the dichotomous view is too simple. Instead, there are effect indicators and three types of variables on which a latent variable depends: causal indicators, composite (formative) indicators, and covariates (the “three Cs”). Caus...
Semiparametric approach for non-monotone missing covariates in a parametric regression model
Sinha, Samiran
2014-02-26
Missing covariate data often arise in biomedical studies, and analysis of such data that ignores subjects with incomplete information may lead to inefficient and possibly biased estimates. A great deal of attention has been paid to handling a single missing covariate or a monotone pattern of missing data when the missingness mechanism is missing at random. In this article, we propose a semiparametric method for handling non-monotone patterns of missing data. The proposed method relies on the assumption that the missingness mechanism of a variable does not depend on the missing variable itself but may depend on the other missing variables. This mechanism is somewhat less general than the completely non-ignorable mechanism but is sometimes more flexible than the missing at random mechanism, where the missingness mechanism is allowed to depend only on the completely observed variables. The proposed approach is robust to misspecification of the distribution of the missing covariates, and the proposed mechanism helps to nullify (or reduce) the problems due to non-identifiability that result from the non-ignorable missingness mechanism. The asymptotic properties of the proposed estimator are derived. Finite sample performance is assessed through simulation studies. Finally, for the purpose of illustration we analyze an endometrial cancer dataset and a hip fracture dataset.
Heggeseth, Brianna C; Jewell, Nicholas P
2013-07-20
Multivariate Gaussian mixtures are a class of models that provide a flexible parametric approach for the representation of heterogeneous multivariate outcomes. When the outcome is a vector of repeated measurements taken on the same subject, there is often inherent dependence between observations. However, a common covariance assumption is conditional independence; that is, given the mixture component label, the outcomes for subjects are independent. In this paper, we study, through asymptotic bias calculations and simulation, the impact of covariance misspecification in multivariate Gaussian mixtures. Although maximum likelihood estimators of regression and mixing probability parameters are not consistent under misspecification, they have little asymptotic bias when mixture components are well separated or if the assumed correlation is close to the truth even when the covariance is misspecified. We also present a robust standard error estimator and show that it outperforms conventional estimators in simulations and can indicate that the model is misspecified. Body mass index data from a national longitudinal study are used to demonstrate the effects of misspecification on potential inferences made in practice. Copyright © 2013 John Wiley & Sons, Ltd.
Boyarinov, V. F.; Grol, A. V.; Fomichenko, P. A.; Ternovykh, M. Yu
2017-01-01
This work is aimed at improvement of HTGR neutron physics design calculations by application of uncertainty analysis with the use of cross-section covariance information. Methodology and codes for preparation of multigroup libraries of covariance information for individual isotopes from the basic 44-group library of SCALE-6 code system were developed. A 69-group library of covariance information in a special format for main isotopes and elements typical for high temperature gas cooled reactors (HTGR) was generated. This library can be used for estimation of uncertainties, associated with nuclear data, in analysis of HTGR neutron physics with design codes. As an example, calculations of one-group cross-section uncertainties for fission and capture reactions for main isotopes of the MHTGR-350 benchmark, as well as uncertainties of the multiplication factor (k∞) for the MHTGR-350 fuel compact cell model and fuel block model were performed. These uncertainties were estimated by the developed technology with the use of WIMS-D code and modules of SCALE-6 code system, namely, by TSUNAMI, KENO-VI and SAMS. Eight most important reactions on isotopes for MHTGR-350 benchmark were identified, namely: 10B(capt), 238U(n,γ), ν5, 235U(n,γ), 238U(el), natC(el), 235U(fiss)-235U(n,γ), 235U(fiss).
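The propagation of cross-section covariance data to an uncertainty in a response such as k∞ is commonly summarised by the "sandwich rule", var(k) = SᵀCS, where S is the sensitivity vector and C the relative covariance matrix of the nuclear data. A minimal numpy sketch (toy numbers, not the MHTGR-350 values; the function name is ours):

```python
import numpy as np

# Hypothetical illustration of the sandwich rule used by tools such as SAMS:
# the variance of a response k due to nuclear-data uncertainties is S^T C S,
# where S holds relative sensitivities of k to each cross section and C is
# the relative covariance matrix of those cross sections.
def response_uncertainty(S, C):
    """Relative standard deviation of the response from sensitivity vector S
    and relative covariance matrix C."""
    S = np.asarray(S, dtype=float)
    return np.sqrt(S @ C @ S)

# Toy numbers (not from the benchmark): two reactions with 1% and 2%
# relative uncertainty and a 0.5 correlation between them.
sigma = np.array([0.01, 0.02])
corr = np.array([[1.0, 0.5], [0.5, 1.0]])
C = np.outer(sigma, sigma) * corr
S = np.array([0.8, -0.3])          # sensitivities dk/k per dsigma/sigma
print(response_uncertainty(S, C))  # combined relative uncertainty of k
```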
Earth Observing System Covariance Realism
Zaidi, Waqar H.; Hejduk, Matthew D.
2016-01-01
The purpose of covariance realism is to properly size a primary object's covariance in order to add validity to the calculation of the probability of collision. The covariance realism technique in this paper consists of three parts: collection/calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics. An empirical cumulative distribution function (ECDF) Goodness-of-Fit (GOF) method is employed to determine whether a covariance is properly sized by comparing the empirical distribution of Mahalanobis distance calculations to the hypothesized parent 3-DoF chi-squared distribution. To realistically size a covariance for collision probability calculations, this study uses a state noise compensation algorithm that adds process noise to the definitive epoch covariance to account for uncertainty in the force model. Process noise is added until the GOF tests pass a group significance level threshold. The results of this study indicate that when outliers attributed to persistently high or extreme levels of solar activity are removed, the aforementioned covariance realism compensation method produces a tuned covariance with up to 80 to 90% of the covariance propagation timespan passing the GOF tests (against a 60% minimum passing threshold), a quite satisfactory and useful result.
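The core of such a covariance-realism test can be sketched in a few lines: if the covariance is properly sized, squared Mahalanobis distances of the state errors follow a chi-squared distribution with 3 degrees of freedom, which an ECDF-based GOF statistic can check. The sketch below uses synthetic data and hypothetical function names; it is not the operational implementation:

```python
import math
import numpy as np

def chi2_cdf_3dof(x):
    """Closed-form CDF of the chi-squared distribution with 3 degrees of freedom."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return np.array([math.erf(math.sqrt(v / 2.0))
                     - math.sqrt(2.0 * v / math.pi) * math.exp(-v / 2.0)
                     for v in x])

def mahalanobis_sq(states, truth, P):
    """Squared Mahalanobis distances of estimated states from truth under covariance P."""
    d = np.asarray(states) - np.asarray(truth)
    Pinv = np.linalg.inv(P)
    return np.einsum('ij,jk,ik->i', d, Pinv, d)

def ecdf_gof_statistic(m2):
    """KS-style distance between the empirical CDF of squared Mahalanobis
    distances and the hypothesized chi-squared(3) parent distribution."""
    m2 = np.sort(np.asarray(m2))
    n = len(m2)
    theo = chi2_cdf_3dof(m2)
    return max(np.max(np.arange(1, n + 1) / n - theo),
               np.max(theo - np.arange(0, n) / n))

# Synthetic check: errors actually drawn from covariance P should pass.
rng = np.random.default_rng(0)
P = np.diag([4.0, 1.0, 0.25])            # a hypothetical 3-DoF position covariance
samples = rng.multivariate_normal(np.zeros(3), P, size=5000)
D = ecdf_gof_statistic(mahalanobis_sq(samples, np.zeros(3), P))
print(D)   # small statistic -> covariance is realistically sized
```

An undersized covariance (e.g. 0.5*P) inflates the Mahalanobis distances and makes the statistic large, which is the condition under which process noise would be added.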
Kim, S
2008-03-01
The notations used in Table 4 are as follows: βk, direct effect; βTk, total effect; and βsbk, superbeta. There are some interesting findings from the results presented in Table 4. For the outcome variable Customer satisfaction, the superbeta measure was strongest... the corresponding 95% HPD interval contains 0. This suggests that ignoring the heterogeneity and/or covariates gives different conclusions based on the total-effect measure. Also from Table 4, we see that for the outcome variable Customer satisfaction, all the 3...
Analysis of fMRI data using noise-diffusion network models: a new covariance-coding perspective.
Gilson, Matthieu
2018-04-01
Since the middle of the 1990s, studies of resting-state fMRI/BOLD data have explored the correlation patterns of activity across the whole brain, which is referred to as functional connectivity (FC). Among the many methods that have been developed to interpret FC, a recently proposed model-based approach describes the propagation of fluctuating BOLD activity within the recurrently connected brain network by inferring the effective connectivity (EC). In this model, EC quantifies the strengths of directional interactions between brain regions, viewed from the proxy of BOLD activity. In addition, the tuning procedure for the model provides estimates for the local variability (input variances) to explain how the observed FC is generated. Generalizing, the network dynamics can be studied in the context of an input-output mapping (determined by EC) for the second-order statistics of fluctuating nodal activities. The present paper focuses on the following detection paradigm: observing output covariances, how discriminative is the (estimated) network model with respect to various input covariance patterns? An application with the model fitted to experimental fMRI data (movie viewing versus resting state) illustrates that changes in local variability and changes in brain coordination go hand in hand.
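In this class of models the fluctuating activity is typically an Ornstein-Uhlenbeck process, so the model's output covariance Q follows from the Jacobian J (which carries the EC) and the input covariance Sigma through the Lyapunov equation J Q + Q Jᵀ + Σ = 0. A minimal numpy sketch of this input-output mapping (toy 3-node network, assumed formulation, not the fitted whole-brain model):

```python
import numpy as np

def stationary_covariance(J, Sigma):
    """Solve the Lyapunov equation J Q + Q J^T + Sigma = 0 for Q by
    vectorization. J must be stable (eigenvalues with negative real part)."""
    n = J.shape[0]
    I = np.eye(n)
    A = np.kron(I, J) + np.kron(J, I)
    q = np.linalg.solve(A, -Sigma.flatten())
    return q.reshape(n, n)

# Toy 3-node network: leak of -1 per node plus weak directed coupling
# (the "effective connectivity"), and node-specific input variances.
J = np.array([[-1.0, 0.2, 0.0],
              [0.0, -1.0, 0.3],
              [0.1, 0.0, -1.0]])
Sigma = np.diag([1.0, 0.5, 0.8])
Q = stationary_covariance(J, Sigma)   # model prediction of the FC matrix
```

Changing either the EC entries of J or the diagonal of Sigma changes Q, which is the mapping the detection paradigm above interrogates.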
Evaluation of Global Photosynthesis and BVOC Emission Covariance with Climate in NASA ModelE2-Y
Unger, N.
2012-12-01
-dependent fluxes across a broad range of different ecosystem types. In tropical ecosystems, the model simulates the campaign-average diurnal cycle with remarkable fidelity (root-mean-square error = 0.20 mgC/m2/hr; normalized mean bias = -5%). The model underpredicts in broadleaf deciduous ecosystems in the United States and Europe. We probe the GPP and BVOC emission covariance with climate in tropical, temperate and boreal ecosystems, and the GPP-HCHO correlation using fire-free HCHO columns from OMI and SCIAMACHY 2005-2008.
Covariation in Natural Causal Induction.
Cheng, Patricia W.; Novick, Laura R.
1991-01-01
Biases and models usually offered by cognitive and social psychology and by philosophy to explain causal induction are evaluated with respect to focal sets (contextually determined sets of events over which covariation is computed). A probabilistic contrast model is proposed as underlying covariation computation in natural causal induction. (SLD)
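The probabilistic contrast in its simplest form is ΔP = P(effect | cause present) − P(effect | cause absent), computed over the focal set. A toy sketch (hypothetical data and function name):

```python
# Toy sketch of the probabilistic contrast: within a focal set of events,
# Delta-P = P(effect | cause present) - P(effect | cause absent).
def probabilistic_contrast(events):
    """events: iterable of (cause_present, effect_present) booleans over the focal set."""
    with_cause = [e for c, e in events if c]
    without_cause = [e for c, e in events if not c]
    p_e_given_c = sum(with_cause) / len(with_cause)
    p_e_given_not_c = sum(without_cause) / len(without_cause)
    return p_e_given_c - p_e_given_not_c

# Hypothetical focal set: 8 cases with the candidate cause, 8 without.
focal_set = [(True, True)] * 6 + [(True, False)] * 2 \
          + [(False, True)] * 1 + [(False, False)] * 7
print(probabilistic_contrast(focal_set))  # 0.75 - 0.125 = 0.625
```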
Ong, M L; Ng, E Y K
2005-12-01
In the lower brain, body temperature is continually being regulated almost flawlessly despite huge fluctuations in ambient and physiological conditions that constantly threaten the well-being of the body. The underlying control problem defining thermal homeostasis is one of great enormity: Many systems and sub-systems are involved in temperature regulation and physiological processes are intrinsically complex and intertwined. Thus the defining control system has to take into account the complications of nonlinearities, system uncertainties, delayed feedback loops as well as internal and external disturbances. In this paper, we propose a self-tuning adaptive thermal controller based upon Hebbian feedback covariance learning where the system is to be regulated continually to best suit its environment. This hypothesis is supported in part by postulations of the presence of adaptive optimization behavior in biological systems of certain organisms which face limited resources vital for survival. We demonstrate the use of Hebbian feedback covariance learning as a possible self-adaptive controller in body temperature regulation. The model postulates an important role of Hebbian covariance adaptation as a means of reinforcement learning in the thermal controller. The passive system is based on a simplified 2-node core and shell representation of the body, where global responses are captured. Model predictions are consistent with observed thermoregulatory responses to conditions of exercise and rest, and heat and cold stress. An important implication of the model is that optimal physiological behaviors arising from self-tuning adaptive regulation in the thermal controller may be responsible for the departure from homeostasis in abnormal states, e.g., fever. This was previously unexplained using the conventional "set-point" control theory.
Dreano, Denis; Mallick, Bani; Hoteit, Ibrahim
2015-04-27
A statistical model is proposed to filter satellite-derived chlorophyll concentration from the Red Sea, and to predict future chlorophyll concentrations. The seasonal trend is first estimated after filling missing chlorophyll data using an Empirical Orthogonal Function (EOF)-based algorithm (Data Interpolation EOF). The anomalies are then modeled as a stationary Gaussian process. A method proposed by Gneiting (2002) is used to construct positive-definite space-time covariance models for this process. After choosing an appropriate statistical model and identifying its parameters, Kriging is applied in the space-time domain to make a one step ahead prediction of the anomalies. The latter serves as the prediction model of a reduced-order Kalman filter, which is applied to assimilate and predict future chlorophyll concentrations. The proposed method decreases the root mean square (RMS) prediction error by about 11% compared with the seasonal average.
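For illustration, one standard member of the Gneiting (2002) class of positive-definite nonseparable space-time covariance functions can be written down and its positive-definiteness checked numerically on a small space-time grid. The parameter values and function name below are assumptions for the sketch, not the fitted Red Sea model:

```python
import numpy as np

def gneiting_cov(h, u, sigma2=1.0, a=1.0, c=1.0):
    """One member of the Gneiting (2002) class of nonseparable space-time
    covariances (spatial dimension d = 2):
        C(h, u) = sigma2 / (1 + a*u**2) * exp(-c*h**2 / (1 + a*u**2))
    h: spatial distance, u: temporal lag. Parameter values are illustrative."""
    psi = 1.0 + a * u ** 2
    return sigma2 / psi * np.exp(-c * h ** 2 / psi)

# Space-time grid: a few 2-D locations observed at a few times.
rng = np.random.default_rng(1)
sites = rng.uniform(0.0, 1.0, size=(6, 2))
times = np.arange(4.0)
pts = np.array([(x, y, t) for (x, y) in sites for t in times])
n = len(pts)
C = np.empty((n, n))
for i in range(n):
    for j in range(n):
        h = np.hypot(pts[i, 0] - pts[j, 0], pts[i, 1] - pts[j, 1])
        u = pts[i, 2] - pts[j, 2]
        C[i, j] = gneiting_cov(h, u)
# C is a valid covariance matrix: symmetric with non-negative eigenvalues.
```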
Hounyo, Ulrich
to a general class of estimators of integrated covolatility. We then show the first-order asymptotic validity of this method in the multivariate context with a potential presence of jumps, dependent microstructure noise, irregularly spaced and non-synchronous data. Due to our focus on non...... covariance estimator. As an application of our results, we also consider the bootstrap for regression coefficients. We show that the wild blocks of blocks bootstrap, appropriately centered, is able to mimic both the dependence and heterogeneity of the scores, thus justifying the construction of bootstrap percentile...... intervals as well as variance estimates in this context. This contrasts with the traditional pairs bootstrap, which is not able to mimic the score heterogeneity even in the simple case where no microstructure noise is present. Our Monte Carlo simulations show that the wild blocks of blocks bootstrap improves...
Pozsgay, Victor; Hirsch, Flavien; Branciard, Cyril; Brunner, Nicolas
2017-12-01
We introduce Bell inequalities based on covariance, one of the most common measures of correlation. Explicit examples are discussed, and violations in quantum theory are demonstrated. A crucial feature of these covariance Bell inequalities is their nonlinearity; this has nontrivial consequences for the derivation of their local bound, which is not reached by deterministic local correlations. For our simplest inequality, we derive analytically tight bounds for both local and quantum correlations. An interesting application of covariance Bell inequalities is that they can act as "shared randomness witnesses": specifically, the value of the Bell expression gives device-independent lower bounds on both the dimension and the entropy of the shared random variable in a local model.
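The nonlinearity can be illustrated directly: any deterministic local strategy produces zero-variance outcomes, so every covariance vanishes, while shared randomness alone already yields nonzero covariances. A toy sketch (a CHSH-style combination of covariances chosen for illustration, not necessarily the paper's exact inequality):

```python
# Toy local hidden-variable model: a shared variable selects a deterministic
# strategy (a_outputs, b_outputs) with entries in {-1, +1}; covariances are
# computed over the distribution of the shared variable.
def setting_covariance(strategies, weights, x, y):
    """cov(A_x, B_y) for a local model mixing deterministic strategies."""
    ea = sum(w * s[0][x] for s, w in zip(strategies, weights))
    eb = sum(w * s[1][y] for s, w in zip(strategies, weights))
    eab = sum(w * s[0][x] * s[1][y] for s, w in zip(strategies, weights))
    return eab - ea * eb

def chsh_like_covariance_sum(strategies, weights):
    # CHSH-style combination of covariances (illustrative expression only).
    c = lambda x, y: setting_covariance(strategies, weights, x, y)
    return c(0, 0) + c(0, 1) + c(1, 0) - c(1, 1)

# A single deterministic strategy has zero variance, hence zero covariance:
det = ((+1, -1), (+1, +1))
print(chsh_like_covariance_sum([det], [1.0]))          # 0.0

# Shared randomness (mixing two strategies) yields nonzero covariances:
mix = [((+1, +1), (+1, +1)), ((-1, -1), (-1, -1))]
print(chsh_like_covariance_sum(mix, [0.5, 0.5]))       # 2.0
```

This is exactly why such expressions can witness shared randomness: the value attainable depends on how much randomness the local model shares.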
Kirschbaum, Miko U F; Rutledge, Susanna; Kuijper, Isoude A; Mudge, Paul L; Puche, Nicolas; Wall, Aaron M; Roach, Chris G; Schipper, Louis A; Campbell, David I
2015-04-15
We used two years of eddy covariance (EC) measurements collected over an intensively grazed dairy pasture to better understand the key drivers of changes in soil organic carbon stocks. Analysing grazing systems with EC measurements poses significant challenges as the respiration from grazing animals can result in large short-term CO2 fluxes. As paddocks are grazed only periodically, EC observations derive from a mosaic of paddocks with very different exchange rates. This violates the assumptions implicit in the use of EC methodology. To test whether these challenges could be overcome, and to develop a tool for wider scenario testing, we compared EC measurements with simulation runs with the detailed ecosystem model CenW 4.1. Simulations were run separately for 26 paddocks around the EC tower and coupled to a footprint analysis to estimate net fluxes at the EC tower. Overall, we obtained good agreement between modelled and measured fluxes, especially for the comparison of evapotranspiration rates, with model efficiency of 0.96 for weekly averaged values of the validation data. For net ecosystem productivity (NEP) comparisons, observations were omitted when cattle grazed the paddocks immediately around the tower. With those points omitted, model efficiencies for weekly averaged values of the validation data were 0.78, 0.67 and 0.54 for daytime, night-time and 24-hour NEP, respectively. While not included for model parameterisation, simulated gross primary production also agreed closely with values inferred from eddy covariance measurements (model efficiency of 0.84 for weekly averages). The study confirmed that CenW simulations could adequately model carbon and water exchange in grazed pastures. It highlighted the critical role of animal respiration for net CO2 fluxes, and showed that EC studies of grazed pastures need to consider the best approach of accounting for this important flux to avoid unbalanced accounting. Copyright © 2015. Published by Elsevier B.V.
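The "model efficiency" quoted above is commonly the Nash-Sutcliffe statistic: one minus the ratio of the squared model-data mismatch to the variance of the observations (1 = perfect fit, 0 = no better than the observed mean). A short sketch with hypothetical numbers, not the paper's data:

```python
import numpy as np

def model_efficiency(observed, simulated):
    """Nash-Sutcliffe model efficiency: 1 - SSE / variance-of-observations.
    1.0 is a perfect fit; 0.0 means no better than the observed mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    sse = np.sum((observed - simulated) ** 2)
    sst = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - sse / sst

# Hypothetical weekly-averaged flux values (illustrative only).
obs = np.array([1.2, 0.8, 1.5, 2.0, 1.1, 0.9])
sim = np.array([1.1, 0.9, 1.4, 1.8, 1.2, 1.0])
print(model_efficiency(obs, sim))
```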
Henrot, Alexandra-Jane; François, Louis; Dury, Marie; Hambuckers, Alain; Jacquemin, Ingrid; Minet, Julien; Tychon, Bernard; Heinesch, Bernard; Horemans, Joanna; Deckmyn, Gaby
2015-04-01
Eddy covariance measurements are an essential resource to understand how ecosystem carbon fluxes react in response to climate change, and to help to evaluate and validate the performance of land surface and vegetation models at regional and global scale. In the framework of the MASC project ("Modelling and Assessing Surface Change impacts on Belgian and Western European climate"), vegetation dynamics and carbon fluxes of forest and grassland ecosystems simulated by the CARAIB dynamic vegetation model (Dury et al., iForest - Biogeosciences and Forestry, 4:82-99, 2011) are evaluated and validated by comparison of the model predictions with eddy covariance data. Here carbon fluxes (e.g. net ecosystem exchange (NEE), gross primary productivity (GPP), and ecosystem respiration (RECO)) and evapotranspiration (ET) simulated with the CARAIB model are compared with the fluxes measured at several eddy covariance flux tower sites in Belgium and Western Europe, chosen from the FLUXNET global network (http://fluxnet.ornl.gov/). CARAIB is forced either with surface atmospheric variables derived from the global CRU climatology, or with in situ meteorological data. Several tree (e.g. Pinus sylvestris, Fagus sylvatica, Picea abies) and grass species (e.g. Poaceae, Asteraceae) are simulated, depending on the species encountered on the studied sites. The aim of our work is to assess the model ability to reproduce the daily, seasonal and interannual variability of carbon fluxes and the carbon dynamics of forest and grassland ecosystems in Belgium and Western Europe.
Gong, Maozhen
Selecting an appropriate prior distribution is a fundamental issue in Bayesian Statistics. In this dissertation, under the framework provided by Berger and Bernardo, I derive the reference priors for several models which include: Analysis of Variance (ANOVA)/Analysis of Covariance (ANCOVA) models with a categorical variable under common ordering constraints, the conditionally autoregressive (CAR) models and the simultaneous autoregressive (SAR) models with a spatial autoregression parameter rho considered. The performances of reference priors for ANOVA/ANCOVA models are evaluated by simulation studies with comparisons to Jeffreys' prior and Least Squares Estimation (LSE). The priors are then illustrated in a Bayesian model of the "Risk of Type 2 Diabetes in New Mexico" data, where the relationship between the type 2 diabetes risk (through Hemoglobin A1c) and different smoking levels is investigated. In both simulation studies and real data set modeling, the reference priors that incorporate internal order information show good performances and can be used as default priors. The reference priors for the CAR and SAR models are also illustrated in the "1999 SAT State Average Verbal Scores" data with a comparison to a Uniform prior distribution. Due to the complexity of the reference priors for both CAR and SAR models, only a portion (12 states in the Midwest) of the original data set is considered. The reference priors can give a different marginal posterior distribution compared to a Uniform prior, which provides an alternative for prior specifications for areal data in Spatial statistics.
Generalized Linear Covariance Analysis
Carpenter, James R.; Markley, F. Landis
2014-01-01
This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.
Forecasting Multivariate Volatility using the VARFIMA Model on Realized Covariance Cholesky Factors
Halbleib, Roxana; Voev, Valeri
2011-01-01
This paper analyzes the forecast accuracy of the multivariate realized volatility model introduced by Chiriac and Voev (2010), subject to different degrees of model parametrization and economic evaluation criteria. By modelling the Cholesky factors of the covariance matrices, the model generates......, regardless of the type of utility function or return distribution, would be better off from using this model than from using some standard approaches....
Neiman, Tal; Loewenstein, Yonatan
2013-01-23
In free operant experiments, subjects alternate at will between targets that yield rewards stochastically. Behavior in these experiments is typically characterized by (1) an exponential distribution of stay durations, (2) matching of the relative time spent at a target to its relative share of the total number of rewards, and (3) adaptation after a change in the reward rates that can be very fast. The neural mechanism underlying these regularities is largely unknown. Moreover, current decision-making neural network models typically aim at explaining behavior in discrete-time experiments in which a single decision is made once in every trial, making these models hard to extend to the more natural case of free operant decisions. Here we show that a model based on attractor dynamics, in which transitions are induced by noise and preference is formed via covariance-based synaptic plasticity, can account for the characteristics of behavior in free operant experiments. We compare a specific instance of such a model, in which two recurrently excited populations of neurons compete for higher activity, to the behavior of rats responding on two levers for rewarding brain stimulation on a concurrent variable interval reward schedule (Gallistel et al., 2001). We show that the model is consistent with the rats' behavior, and in particular, with the observed fast adaptation to matching behavior. Further, we show that the neural model can be reduced to a behavioral model, and we use this model to deduce a novel "conservation law," which is consistent with the behavior of the rats.
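The plasticity rule at the heart of such models changes a synaptic weight in proportion to the covariance between neural activity and reward. A minimal sketch of this idea (illustrative only, not the paper's full attractor network; names are ours):

```python
import numpy as np

# Minimal sketch of a covariance-based plasticity rule: the weight change is
# proportional to the sample covariance between activity and reward over a
# batch of trials, so only activity that predicts reward is reinforced.
def covariance_update(w, activity, reward, eta=0.1):
    da = activity - activity.mean()
    dr = reward - reward.mean()
    return w + eta * np.mean(da * dr)

rng = np.random.default_rng(2)
activity = rng.normal(size=500)
reward_correlated = activity + 0.5 * rng.normal(size=500)   # reward tracks activity
reward_uncorrelated = rng.normal(size=500)                   # reward independent

w_up = covariance_update(0.0, activity, reward_correlated)   # grows
w_flat = covariance_update(0.0, activity, reward_uncorrelated)  # stays near 0
```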
Multilevel Regression Models for Mean and (Co)variance: with Applications in Nursing Research
Li, Bayoue
2014-01-01
__Abstract__ In this chapter, a concise overview is provided of the statistical techniques that are applied in this thesis. This includes two classes of statistical modeling approaches which have been commonly applied in many research areas for decades. Namely, we describe the fundamental ideas of mixed effects models and factor analytic (FA) models. To be specific, this chapter covers several types of these two classes of modeling approaches. For the mixed ...
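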
Branching fractions of semileptonic D and D{sub s} decays from the covariant light-front quark model
Cheng, Hai-Yang; Kang, Xian-Wei [Academia Sinica, Institute of Physics, Taipei (China)]
2017-09-15
Based on the predictions of the relevant form factors from the covariant light-front quark model, we show the branching fractions for the D(D{sub s}) → (P, S, V, A) lν{sub l} (l = e or μ) decays, where P denotes the pseudoscalar meson, S the scalar meson with a mass above 1 GeV, V the vector meson and A the axial-vector one. Comparisons with the available experimental results are made, and we find excellent agreement. The predictions for other decay modes can be tested in a charm factory, e.g., the BESIII detector. The future measurements will definitely further enrich our knowledge of the hadronic transition form factors as well as the inner structure of the even-parity mesons (S and A). (orig.)
Ichida, J M; Wassell, J T; Keller, M D; Ayers, L W
1993-02-01
Survival analysis methods are valuable for detecting intervention effects because detailed information from patient records and sensitive outcome measures are used. The burn unit at a large university hospital replaced routine bathing with total body bathing using chlorhexidine gluconate for antimicrobial effect. A Cox proportional hazards model was used to analyse time from admission until either infection with Staphylococcus aureus or discharge for 155 patients, controlling for burn severity and two time-dependent covariates: days until first wound excision and days until first administration of prophylactic antibiotics. The risk of infection was 55 per cent higher in the historical control group, although not statistically significant. There was also some indication that early wound excision may be important as an infection-control measure for burn patients.
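A reported 55 per cent higher risk corresponds to a Cox hazard ratio of about 1.55, i.e. a coefficient of log(1.55) on the group indicator. The sketch below fits a one-covariate Cox model on toy data by maximising the Breslow partial likelihood over a grid (hypothetical data and function names; a real analysis would use a survival package and include the time-dependent covariates):

```python
import numpy as np

def cox_partial_loglik(beta, times, events, x):
    """Breslow-style Cox partial log-likelihood for a single covariate x."""
    order = np.argsort(times)
    times, events, x = times[order], events[order], x[order]
    ll = 0.0
    for i in range(len(times)):
        if events[i]:
            risk_set = x[i:]               # subjects still at risk at times[i]
            ll += beta * x[i] - np.log(np.sum(np.exp(beta * risk_set)))
    return ll

# Toy cohort: group 1 (e.g. historical controls) tends to fail earlier.
times = np.array([2., 3., 4., 5., 6., 8., 9., 12., 14., 15.])
events = np.array([1, 1, 1, 1, 0, 1, 1, 0, 1, 1])
group = np.array([1, 1, 1, 0, 1, 0, 1, 0, 0, 0])

grid = np.linspace(-3.0, 3.0, 601)
beta_hat = grid[np.argmax([cox_partial_loglik(b, times, events, group)
                           for b in grid])]
hazard_ratio = np.exp(beta_hat)            # > 1 means higher risk in group 1
```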
Bias Correction in the Dynamic Panel Data Model with a Nonscalar Disturbance Covariance Matrix
Bun, M.J.G.
2003-01-01
Approximation formulae are developed for the bias of ordinary and generalized Least Squares Dummy Variable (LSDV) estimators in dynamic panel data models. Results from Kiviet [Kiviet, J. F. (1995), on bias, inconsistency, and efficiency of various estimators in dynamic panel data models, J.
Uncertainty in eddy covariance measurements and its application to physiological models
D.Y. Hollinger; A.D. Richardson; A.D. Richardson
2005-01-01
Flux data are noisy, and this uncertainty is largely due to random measurement error. Knowledge of uncertainty is essential for the statistical evaluation of modeled and measured fluxes, for comparison of parameters derived by fitting models to measured fluxes and in formal data-assimilation efforts. We used the difference between simultaneous measurements from two...
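The two-tower idea mentioned above can be sketched in a few lines: if two instruments independently measure the same true flux, the standard deviation of their difference is √2 times the single-measurement random error. Synthetic illustration (assumed noise model, not the paper's data):

```python
import numpy as np

# Sketch of the paired-measurement approach: two instruments observe the same
# true flux with independent random errors, so
#   sigma_single = std(x1 - x2) / sqrt(2).
rng = np.random.default_rng(3)
true_flux = rng.normal(5.0, 2.0, size=2000)     # synthetic half-hourly fluxes
noise_sd = 0.7                                   # per-instrument random error
x1 = true_flux + rng.normal(0.0, noise_sd, 2000)
x2 = true_flux + rng.normal(0.0, noise_sd, 2000)

sigma_single = np.std(x1 - x2) / np.sqrt(2.0)
print(sigma_single)   # should recover roughly noise_sd
```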
A Second-Order Conditionally Linear Mixed Effects Model with Observed and Latent Variable Covariates
Harring, Jeffrey R.; Kohli, Nidhi; Silverman, Rebecca D.; Speece, Deborah L.
2012-01-01
A conditionally linear mixed effects model is an appropriate framework for investigating nonlinear change in a continuous latent variable that is repeatedly measured over time. The efficacy of the model is that it allows parameters that enter the specified nonlinear time-response function to be stochastic, whereas those parameters that enter in a…
Bias correction in the dynamic panel data model with a nonscalar disturbance covariance matrix
Bun, M.J.G.
2001-01-01
Approximation formulae are developed for the bias of ordinary and generalized Least Squares Dummy Variable (LSDV) estimators in dynamic panel data models. Results from Kiviet (1995, 1999) are extended to higher-order dynamic panel data models with general covariance structure. The focus is on estimation
Cheung, Mike W. L.; Chan, Wai
2009-01-01
Structural equation modeling (SEM) is widely used as a statistical framework to test complex models in behavioral and social sciences. When the number of publications increases, there is a need to systematically synthesize them. Methodology of synthesizing findings in the context of SEM is known as meta-analytic SEM (MASEM). Although correlation…
A cautionary note on the use of information fit indexes in covariance structure modeling with means
Wicherts, J.M.; Dolan, C.V.
2004-01-01
Information fit indexes such as Akaike Information Criterion, Consistent Akaike Information Criterion, Bayesian Information Criterion, and the expected cross validation index can be valuable in assessing the relative fit of structural equation models that differ regarding restrictiveness. In cases
Covariant two-particle wave functions for model quasipotentials admitting exact solutions
Kapshaj, V.N.; Skachkov, N.B.
1983-01-01
Two formulations of quasipotential equations in the relativistic configurational representation are considered for the wave function of the internal motion of the bound system of two relativistic particles. Exact solutions of these equations are found for some model quasipotentials
Covariant two-particle wave functions for model quasipotential allowing exact solutions
Kapshaj, V.N.; Skachkov, N.B.
1982-01-01
Two formulations of quasipotential equations in the relativistic configurational representation are considered for the wave function of relative motion of a bound state of two relativistic particles. Exact solutions of these equations are found for some model quasipotentials
Klees, R.; Slobbe, D. C.; Farahani, H. H.
2018-03-01
The posed question arises for instance in regional gravity field modelling using weighted least-squares techniques if the gravity field functionals are synthesised from the spherical harmonic coefficients of a satellite-only global gravity model (GGM), and are used as one of the noisy datasets. The associated noise covariance matrix appeared to be extremely ill-conditioned, with a singular value spectrum that decayed gradually to zero without any noticeable gap. We analysed three methods to deal with the ill-conditioned noise covariance matrix: Tikhonov regularisation of the noise covariance matrix in combination with the standard formula for the weighted least-squares estimator, a formula for the weighted least-squares estimator which does not involve the inverse noise covariance matrix, and an estimator based on Rao's unified theory of least-squares. Our analysis was based on a numerical experiment involving a set of height anomalies synthesised from the GGM GOCO05s, which is provided with a full noise covariance matrix. We showed that the three estimators perform similarly, provided that the two regularisation parameters involved in each method were chosen properly. As standard regularisation parameter choice rules do not apply here, we suggested a new parameter choice rule and demonstrated its performance. Using this rule, we found that the differences between the three least-squares estimates were within noise. For the standard formulation of the weighted least-squares estimator with regularised noise covariance matrix, this required an exceptionally strong regularisation, much larger than one would expect from the condition number of the noise covariance matrix. The preferred method is the inversion-free formulation of the weighted least-squares estimator, because of its simplicity with respect to the choice of the two regularisation parameters.
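The first of the three methods can be sketched as follows: add a small multiple of the identity to the noise covariance (Tikhonov regularisation) and plug the result into the standard weighted least-squares formula. The example below uses synthetic data with a smoothly decaying eigenvalue spectrum, mimicking the ill-conditioning described above (all parameter values are illustrative, not the GOCO05s setup):

```python
import numpy as np

# Weighted least squares with a Tikhonov-regularised noise covariance:
#   x_hat = (A^T (C + alpha I)^{-1} A)^{-1} A^T (C + alpha I)^{-1} y
rng = np.random.default_rng(4)
m, n = 60, 3
A = rng.normal(size=(m, n))
x_true = np.array([1.0, -2.0, 0.5])

# Build an extremely ill-conditioned noise covariance: eigenvalues decay
# smoothly to (almost) zero, with no noticeable gap.
U, _ = np.linalg.qr(rng.normal(size=(m, m)))
eigs = np.logspace(0, -16, m)
C = (U * eigs) @ U.T

noise = U @ (np.sqrt(eigs) * rng.normal(size=m))   # noise with covariance C
y = A @ x_true + noise

alpha = 1e-8                        # regularisation parameter (to be tuned)
Creg = C + alpha * np.eye(m)
W = np.linalg.inv(Creg)
x_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
```

As the abstract notes, the choice of alpha matters much more than the condition number alone suggests, which motivates the dedicated parameter choice rule.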
Temperature Covariance in Tree Ring Reconstructions and Model Simulations Over the Past Millennium
Hartl-Meier, C. T. M.; Büntgen, Ulf; Smerdon, J. E.; Zorita, E.; Krusic, P. J.; Ljungqvist, F. C.; Schneider, L.; Esper, J.
2017-01-01
Vol. 44, No. 18 (2017), pp. 9458-9469 ISSN 0094-8276 R&D Projects: GA MŠk(CZ) LO1415 Institutional support: RVO:68378076 Keywords : last millennium * northern-hemisphere * summer temperatures * american southwest * volcanic-eruptions * tibetan plateau * sierra-nevada * system model * central-asia * climate * paleoclimate * spatial temperature synchrony * millennial scale * radiative forcing * proxy model comparison Subject RIV: EH - Ecology, Behaviour OBOR OECD: Environmental sciences (social aspects to be 5.7) Impact factor: 4.253, year: 2016
Full-Scale Approximations of Spatio-Temporal Covariance Models for Large Datasets
Zhang, Bohai; Sang, Huiyan; Huang, Jianhua Z.
2014-01-01
of dataset and application of such models is not feasible for large datasets. This article extends the full-scale approximation (FSA) approach by Sang and Huang (2012) to the spatio-temporal context to reduce computational complexity. A reversible jump Markov
Groenendijk, M.; Dolman, A.J.; Ammann, C.; Arneth, A.; Cescatti, A.; Molen, van der M.K.; Moors, E.J.
2011-01-01
Global vegetation models require the photosynthetic parameters, maximum carboxylation capacity (Vcm), and quantum yield (a) to parameterize their plant functional types (PFTs). The purpose of this work is to determine how much the scaling of the parameters from leaf to ecosystem level through a
Multilevel Regression Models for Mean and (Co)variance: with Applications in Nursing Research
B. Li (Bayoue)
2014-01-01
In this chapter, a concise overview is provided of the statistical techniques that are applied in this thesis. This includes two classes of statistical modelling approaches which have been commonly applied in many research areas for decades. Namely, we
Dielman, T. E.; And Others
1989-01-01
Questionnaires were administered to 4,157 junior high school students to determine levels of alcohol misuse, exposure to peer use and misuse of alcohol, susceptibility to peer pressure, internal health locus of control, and self-esteem. A conceptual model of antecedents of adolescent alcohol misuse and the effectiveness of a prevention effort was…
Pieter-Jan Vlok
2012-01-01
ENGLISH ABSTRACT: Increased competitiveness in the production world necessitates improved maintenance strategies to increase availabilities and drive down cost. The maintenance engineer is thus faced with the need to make more intelligent preventive renewal decisions. Two of the main techniques to achieve this are Condition Monitoring (such as vibration monitoring and oil analysis) and Statistical Failure Analysis (typically using probabilistic techniques). The present paper discusses these techniques, their uses and weaknesses, and then presents the Proportional Hazards Model as a solution to most of these weaknesses. It then goes on to compare the results of the different techniques in monetary terms, using a South African case study. This comparison shows clearly that the Proportional Hazards Model is superior to the present techniques and should be the preferred model for many actual maintenance situations.
Facchi, Arianna; Masseroni, Daniele; Gharsallah, Olfa; Gandolfi, Claudio
2014-05-01
Rice is of great importance both from a food supply point of view, since it represents the main food in the diet of over half the world's population, and from a water resources point of view, since it consumes almost 40% of the water amount used for irrigation. About 90% of global production takes place in Asia, while European production is quantitatively modest (about 3 million tons). However, Italy is Europe's leading producer, with over half of total production, almost totally concentrated in a large traditional paddy rice area between the Lombardy and Piedmont regions, in the north-western part of the country. In this area, irrigation of rice is traditionally carried out by continuous flooding. The high water requirement of this irrigation regime encourages the introduction of water-saving irrigation practices, such as flood irrigation after sowing in dry soil and intermittent irrigation (aerobic rice). During the 2013 agricultural season an intense monitoring activity was conducted on three experimental fields located in the Padana plain (northern Italy) and characterized by different irrigation regimes (traditional flood irrigation, flood irrigation after sowing in dry soil, intermittent irrigation), with the aim of comparing the water balance terms for the three irrigation treatments. Actual evapotranspiration (ET) is one of these terms, but, unlike other water balance components, its field monitoring requires expensive instrumentation. This work explores the possibility of using only one eddy covariance system and Penman-Monteith (PM) type models for the determination of ET fluxes for the three irrigation regimes. An eddy covariance station was installed on the levee between the traditional flooded and the aerobic rice fields, to contemporaneously monitor the ET fluxes from these two treatments as a function of the wind direction. A detailed footprint analysis was conducted - through the application of three different analytical models - to determine the position
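As a hedged illustration of the general form of Penman-Monteith (PM) type models mentioned above - not the specific model used in the study - the standard FAO-56 daily reference evapotranspiration formula can be coded in a few lines; the input values in the example are arbitrary.

```python
import math

def fao56_reference_et(T, Rn, G, u2, RH):
    """Daily FAO-56 Penman-Monteith reference evapotranspiration (mm/day).

    T: mean air temperature (deg C), Rn: net radiation (MJ m-2 day-1),
    G: soil heat flux (MJ m-2 day-1), u2: wind speed at 2 m (m/s),
    RH: mean relative humidity (%). Sea-level atmospheric pressure assumed.
    """
    es = 0.6108 * math.exp(17.27 * T / (T + 237.3))   # saturation vapour pressure (kPa)
    ea = es * RH / 100.0                              # actual vapour pressure (kPa)
    delta = 4098.0 * es / (T + 237.3) ** 2            # slope of the es curve (kPa/degC)
    gamma = 0.067                                     # psychrometric constant at sea level (kPa/degC)
    num = 0.408 * delta * (Rn - G) + gamma * 900.0 / (T + 273.0) * u2 * (es - ea)
    den = delta + gamma * (1.0 + 0.34 * u2)
    return num / den

print(round(fao56_reference_et(T=25.0, Rn=14.0, G=0.0, u2=2.0, RH=60.0), 2))
```

Actual crop ET then requires a crop coefficient or a surface-resistance parameterisation on top of this reference value.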
Äijö, Tarmo; Yue, Xiaojing; Rao, Anjana; Lähdesmäki, Harri
2016-09-01
5-methylcytosine (5mC) is a widely studied epigenetic modification of DNA. The ten-eleven translocation (TET) dioxygenases oxidize 5mC into oxidized methylcytosines (oxi-mCs): 5-hydroxymethylcytosine (5hmC), 5-formylcytosine (5fC) and 5-carboxylcytosine (5caC). DNA methylation modifications have multiple functions. For example, 5mC is shown to be associated with diseases, and oxi-mC species are reported to have a role in active DNA demethylation through 5mC oxidation and DNA repair, among others, but the detailed mechanisms are poorly understood. Bisulphite sequencing and its various derivatives can be used to gain information about all methylation modifications at single nucleotide resolution. Analysis of bisulphite based sequencing data is complicated due to the convoluted read-outs and experiment-specific variation in biochemistry. Moreover, statistical analysis is often complicated by various confounding effects. How to analyse 5mC and oxi-mC data sets with arbitrary and complex experimental designs is an open and important problem. We propose the first method to quantify oxi-mC species with arbitrary covariate structures from bisulphite based sequencing data. Our probabilistic modeling framework combines a previously proposed hierarchical generative model for oxi-mC-seq data and a general linear model component to account for confounding effects. We show that our method provides accurate methylation level estimates and accurate detection of differential methylation when compared with existing methods. Analysis of novel and published data gave insights into the demethylation of the forkhead box P3 (Foxp3) locus during the induced T regulatory cell differentiation. We also demonstrate how our covariate model accurately predicts methylation levels of the Foxp3 locus. Collectively, the LuxGLM method improves the analysis of DNA methylation modifications, particularly for oxi-mC species. An implementation of the proposed method is available under MIT license at https
Negash, A. W.; Mwambi, H.; Zewotir, T.; Eweke, G.
2014-06-01
The most common procedure for analyzing multi-environmental trials is based on the assumption that the residual error variance is homogeneous across all locations considered. However, this may often be unrealistic, and therefore limit the accuracy of variety evaluation or the reliability of variety recommendations. The objectives of this study were to show the advantages of mixed models with spatial variance-covariance structures, and the direct implications of model choice for the inference of varietal performance, ranking and testing, based on two multi-environmental data sets from realistic national trials. A model comparison with a χ²-test for the trials in the two data sets (wheat data set BW00RVTI and barley data set BW01RVII) suggested that selected spatial variance-covariance structures fitted the data significantly better than the ANOVA model. The form of the optimally fitted spatial variance-covariance structure, the ranking and the consistency ratio test were not the same from one trial (location) to the other. Linear mixed models with single-stage analysis, including a spatial variance-covariance structure with a group factor of location in the random model, also improved the estimation of genotype effects and their ranking. The model also improved the estimation of varietal performance because of its capacity to handle additional sources of variation (location and genotype-by-location (environment) interaction variation) and to accommodate local stationary trends. (Author)
Poincare covariance and κ-Minkowski spacetime
Dabrowski, Ludwik; Piacitelli, Gherardo
2011-01-01
A fully Poincare covariant model is constructed as an extension of the κ-Minkowski spacetime. Covariance is implemented by a unitary representation of the Poincare group, and thus complies with the original Wigner approach to quantum symmetries. This provides yet another example (besides the DFR model), where Poincare covariance is realised a la Wigner in the presence of two characteristic dimensionful parameters: the light speed and the Planck length. In other words, a Doubly Special Relativity (DSR) framework may well be realised without deforming the meaning of 'Poincare covariance'. -- Highlights: → We construct a 4d model of noncommuting coordinates (quantum spacetime). → The coordinates are fully covariant under the undeformed Poincare group. → Covariance a la Wigner holds in presence of two dimensionful parameters. → Hence we are not forced to deform covariance (e.g. as quantum groups). → The underlying κ-Minkowski model is unphysical; covariantisation does not cure this.
Sourrouille, Lucas; Casana, Rodolfo
2016-01-01
We have studied the existence of self-dual solitonic solutions in a generalization of the Abelian Chern-Simons-Higgs model. Such a generalization introduces two different nonnegative functions, ω₁(|φ|) and ω(|φ|), which split the kinetic term of the Higgs field, |D_μφ|² → ω₁(|φ|)|D₀φ|² − ω(|φ|)|D_kφ|², breaking the Lorentz covariance explicitly. We have shown that a clean implementation of the Bogomolnyi procedure can only be carried out when ω(|φ|) ∝ β|φ|^(2β−2) with β ≥ 1. The self-dual or Bogomolnyi equations produce an infinite number of soliton solutions obtained by conveniently choosing the generalizing function ω₁(|φ|), which must be able to provide a finite magnetic field. Also, we have shown that by properly choosing the generalizing functions it is possible to reproduce the Bogomolnyi equations of the Abelian Maxwell-Higgs and Chern-Simons-Higgs models. Finally, some new self-dual |φ|⁶-vortex solutions have been analyzed from both the theoretical and numerical points of view.
Groenendijk, M.; Dolman, A. J.; Ammann, C.; Arneth, A.; Cescatti, A.; Dragoni, D.; Gash, J. H. C.; Gianelle, D.; Gioli, B.; Kiely, G.; Knohl, A.; Law, B. E.; Lund, M.; Marcolla, B.; van der Molen, M. K.; Montagnani, L.; Moors, E.; Richardson, A. D.; Roupsard, O.; Verbeeck, H.; Wohlfahrt, G.
2011-12-01
Global vegetation models require the photosynthetic parameters, maximum carboxylation capacity (Vcm), and quantum yield (α) to parameterize their plant functional types (PFTs). The purpose of this work is to determine how much the scaling of the parameters from leaf to ecosystem level through a seasonally varying leaf area index (LAI) explains the parameter variation within and between PFTs. Using Fluxnet data, we simulate a seasonally variable LAIF for a large range of sites, comparable to the LAIM derived from MODIS. There are discrepancies when LAIF reaches zero while LAIM still provides a small positive value. We find that temperature is the most common constraint for LAIF in 55% of the simulations, while global radiation and vapor pressure deficit are the key constraints for 18% and 27% of the simulations, respectively; large differences in this forcing nevertheless exist when looking at specific PFTs. Despite these differences, the annual photosynthesis simulations are comparable when using LAIF or LAIM (r2 = 0.89). We investigated further the seasonal variation of ecosystem-scale parameters derived with LAIF. Vcm has the largest seasonal variation. This holds for all vegetation types and climates. The parameter α is less variable. By including ecosystem-scale parameter seasonality we can explain a considerable part of the ecosystem-scale parameter variation between PFTs. The remaining unexplained leaf-scale PFT variation still needs further work, including elucidating the precise role of leaf and soil level nitrogen.
Dolan, C.V.; Boomsma, D.I.; Neale, M.C.
1999-01-01
The contribution of size 3 and size 4 sibships to power in covariance structure modeling of a codominant QTL is investigated. Power calculations are based on the noncentral chi-square distribution. Sixteen sets of parameter values are considered. Results indicate that size 3 and size 4 sibships
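Power calculations based on the noncentral chi-square distribution, as used in the abstract above, can be sketched without special libraries by Monte Carlo simulation; the degrees of freedom and noncentrality values below are arbitrary illustrations.

```python
import numpy as np

def ncx2_power(df, ncp, alpha=0.05, n_sim=200_000, seed=1):
    """Monte Carlo power of a chi-square test whose statistic follows a
    noncentral chi-square(df, ncp) under the alternative."""
    rng = np.random.default_rng(seed)
    # Critical value of the central chi-square, also by simulation,
    # to keep the sketch dependency-free.
    central = rng.chisquare(df, n_sim)
    crit = np.quantile(central, 1.0 - alpha)
    # Noncentral chi-square draw: sum of df squared normals, one with shifted mean.
    z = rng.standard_normal((n_sim, df))
    z[:, 0] += np.sqrt(ncp)
    stat = (z ** 2).sum(axis=1)
    return (stat > crit).mean()

print(ncx2_power(df=1, ncp=10.0))
```

In practice one would use an exact noncentral chi-square CDF (e.g. `scipy.stats.ncx2`); the simulation makes the definition of power explicit.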
Chen, Chen; Xie, Yuanchang
2014-12-01
Driving hours and rest breaks are closely related to driver fatigue, which is a major contributor to truck crashes. This study investigates the effects of driving hours and rest breaks on commercial truck driver safety. A discrete-time logistic regression model is used to evaluate the crash odds ratios of driving hours and rest breaks. Driving time is divided into 11 one hour intervals. These intervals and rest breaks are modeled as dummy variables. In addition, a Cox proportional hazards regression model with time-dependent covariates is used to assess the transient effects of rest breaks, which consists of a fixed effect and a variable effect. Data collected from two national truckload carriers in 2009 and 2010 are used. The discrete-time logistic regression result indicates that only the crash odds ratio of the 11th driving hour is statistically significant. Taking one, two, and three rest breaks can reduce drivers' crash odds by 68%, 83%, and 85%, respectively, compared to drivers who did not take any rest breaks. The Cox regression result shows clear transient effects for rest breaks. It also suggests that drivers may need some time to adjust themselves to normal driving tasks after a rest break. Overall, the third rest break's safety benefit is very limited based on the results of both models. The findings of this research can help policy makers better understand the impact of driving time and rest breaks and develop more effective rules to improve commercial truck safety. Copyright © 2014 National Safety Council and Elsevier Ltd. All rights reserved.
Yiu, Sean; Farewell, Vernon T; Tom, Brian D M
2018-02-01
In psoriatic arthritis, it is important to understand the joint activity (represented by swelling and pain) and damage processes because both are related to severe physical disability. The paper aims to provide a comprehensive investigation into both processes occurring over time, in particular their relationship, by specifying a joint multistate model at the individual hand joint level, which also accounts for many of their important features. As there are multiple hand joints, such an analysis will be based on the use of clustered multistate models. Here we consider an observation level random-effects structure with dynamic covariates and allow for the possibility that a subpopulation of patients is at minimal risk of damage. Such an analysis is found to provide further understanding of the activity-damage relationship beyond that provided by previous analyses. Consideration is also given to the modelling of mean sojourn times and jump probabilities. In particular, a novel model parameterization which allows easily interpretable covariate effects to act on these quantities is proposed.
Meier-Hirmer, Carolina; Schumacher, Martin
2013-06-20
The aim of this article is to propose several methods that allow one to investigate how and whether the shape of the hazard ratio after an intermediate event depends on the waiting time to the occurrence of this event and/or the sojourn time in this state. A simple multi-state model, the illness-death model, is used as a framework to investigate the occurrence of this intermediate event. Several approaches are shown and their advantages and disadvantages are discussed. All these approaches are based on Cox regression. As different time-scales are used, these models go beyond Markov models. Different estimation methods for the transition hazards are presented. Additionally, time-varying covariates are included in the model using an approach based on fractional polynomials. The different methods of this article are then applied to a dataset consisting of four studies conducted by the German Breast Cancer Study Group (GBSG). The occurrence of the first isolated locoregional recurrence (ILRR) is studied. The results contribute to the debate on the role of the ILRR with respect to the course of the breast cancer disease and the resulting prognosis. We have investigated different modelling strategies for the transition hazard after ILRR or, in general, after an intermediate event. Including time-dependent structures altered the resulting hazard functions considerably, and it was shown that this time-dependent structure has to be taken into account in the case of our breast cancer dataset. The results indicate that an early recurrence increases the risk of death. A late ILRR increases the hazard function much less, and after the successful removal of the second tumour the risk of death is almost the same as before the recurrence. With respect to distant disease, the appearance of the ILRR only slightly increases the risk of death if the recurrence was treated successfully. It is important to realize that there are several modelling strategies for the intermediate event and that
Activities on covariance estimation in Japanese Nuclear Data Committee
Shibata, Keiichi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1997-03-01
Described are activities on covariance estimation in the Japanese Nuclear Data Committee. Covariances are obtained from measurements by using least-squares methods. A simultaneous evaluation was performed to deduce covariances of fission cross sections of U and Pu isotopes. A code system, KALMAN, is used to estimate covariances of nuclear model calculations from uncertainties in model parameters. (author)
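The propagation of model-parameter uncertainties into covariances of calculated quantities, as performed by codes such as KALMAN, follows the "sandwich rule" cov(σ) = S P Sᵀ, with P the parameter covariance and S the sensitivity matrix. The following is an illustrative toy, not the actual KALMAN code: the two-parameter cross-section model and the parameter covariance are invented.

```python
import numpy as np

def model(p, E):
    # Hypothetical 2-parameter model of a cross section vs energy E.
    return p[0] * np.exp(-p[1] * E)

E = np.linspace(0.1, 2.0, 5)          # energy grid
p = np.array([10.0, 1.5])             # best-fit model parameters (assumed)
P = np.array([[0.04, 0.001],
              [0.001, 0.0025]])       # parameter covariance matrix (assumed)

# Sensitivity matrix S_ij = d sigma_i / d p_j by central finite differences.
eps = 1e-6
S = np.empty((E.size, p.size))
for j in range(p.size):
    dp = np.zeros_like(p)
    dp[j] = eps
    S[:, j] = (model(p + dp, E) - model(p - dp, E)) / (2 * eps)

cov_sigma = S @ P @ S.T               # covariance of the calculated cross section
std = np.sqrt(np.diag(cov_sigma))     # pointwise uncertainties
corr = cov_sigma / np.outer(std, std) # correlation matrix between energy points
print(std)
```

The strong off-diagonal correlations produced this way are exactly what evaluated covariance files are meant to capture.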
Stamovlasis, Dimitrios; Papageorgiou, George; Tsitsipis, Georgios; Tsikalas, Themistoklis; Vaiopoulou, Julie
2018-01-01
This paper illustrates two psychometric methods, latent class analysis (LCA) and taxometric analysis (TA), using empirical data from research probing children's mental representation in science learning. LCA is used to obtain a typology based on observed variables and to further investigate how the encountered classes might be related to external variables, where the effectiveness of the classification process and the unbiased estimation of parameters become the main concern. In step-wise LCA, the class membership is assigned and subsequently its relationship with covariates is established. This leading-edge modeling approach suffers from severely downward-biased estimates. The illustration of LCA is focused on alternative bias correction approaches and demonstrates the effect of modal and proportional class-membership assignment along with the BCH and ML correction procedures. The illustration of LCA is presented with three covariates, which are psychometric variables operationalizing formal reasoning, divergent thinking and field dependence-independence, respectively. Moreover, taxometric analysis, a method designed to detect the type of the latent structural model, categorical or dimensional, is introduced, along with the relevant basic concepts and tools. TA was applied complementarily to the same data sets to answer the fundamental hypothesis about children's naïve knowledge on the matters under study, and it comprises an additional asset in building theory, which is fundamental for educational practices. Taxometric analysis provided results that were ambiguous as far as the type of the latent structure is concerned. This finding initiates further discussion and sets a problematization within this framework, rethinking fundamental assumptions and epistemological issues.
Asymptotic Theory for the QMLE in GARCH-X Models with Stationary and Non-Stationary Covariates
Han, Heejoon; Kristensen, Dennis
as captured by its long-memory parameter dx; in particular, we allow for both stationary and non-stationary covariates. We show that the QMLEs of the regression coefficients entering the volatility equation are consistent and normally distributed in large samples, independently of the degree of persistence. This implies that standard inferential tools, such as t-statistics, do not have to be adjusted to the level of persistence. On the other hand, the intercept in the volatility equation is not identified when the covariate is non-stationary, which is akin to the results of Jensen and Rahbek (2004, Econometric...
Choi, Seung Hoan; Labadorf, Adam T; Myers, Richard H; Lunetta, Kathryn L; Dupuis, Josée; DeStefano, Anita L
2017-02-06
Next generation sequencing provides a count of RNA molecules in the form of short reads, yielding discrete, often highly non-normally distributed gene expression measurements. Although Negative Binomial (NB) regression has been generally accepted in the analysis of RNA sequencing (RNA-Seq) data, its appropriateness has not been exhaustively evaluated. We explore logistic regression as an alternative method for RNA-Seq studies designed to compare cases and controls, where disease status is modeled as a function of RNA-Seq reads using simulated and Huntington disease data. We evaluate the effect of adjusting for covariates that have an unknown relationship with gene expression. Finally, we incorporate the data adaptive method in order to compare false positive rates. When the sample size is small or the expression levels of a gene are highly dispersed, the NB regression shows inflated Type-I error rates but the Classical logistic and Bayes logistic (BL) regressions are conservative. Firth's logistic (FL) regression performs well or is slightly conservative. Large sample size and low dispersion generally make Type-I error rates of all methods close to nominal alpha levels of 0.05 and 0.01. However, Type-I error rates are controlled after applying the data adaptive method. The NB, BL, and FL regressions gain increased power with large sample size, large log2 fold-change, and low dispersion. The FL regression has comparable power to NB regression. We conclude that implementing the data adaptive method appropriately controls Type-I error rates in RNA-Seq analysis. Firth's logistic regression provides a concise statistical inference process and reduces spurious associations from inaccurately estimated dispersion parameters in the negative binomial framework.
Castro, C. L.; Beltran-Przekurat, A. B.; Pielke, R. A.
2007-05-01
Previous work has established that the dominant modes of Pacific SSTs influence the summer climate of North America through large-scale forcing, and this effect is most pronounced during the early part of the season. It is hypothesized, then, that land surface influences become more dominant in the latter part of the season as remote teleconnection influences diminish. As a first step toward investigation of this hypothesis in a regional climate model (RCM) framework, the statistically significant spatiotemporal patterns of variability and covariability in North American precipitation (specified by the standardized precipitation index, or SPI), soil moisture, and vegetation are determined for timescales from a month to six months. To specify these respective data we use: CPC gauge-derived precipitation (1950-2000), Variable Infiltration Capacity (VIC) Model and NOAH Model NLDAS soil moisture and temperature, and the Global Inventory Modeling and Mapping Studies Normalized Difference Vegetation Index (GIMMS-NDVI). The principal statistical tool used is multiple taper frequency singular value decomposition (MTM-SVD), and this is supplemented by wavelet analysis for specific areas of interest. The significant interannual variability in all of these data occurs at a timescale of about 7 to 9 years and appears to be the integrated effect of remote SST forcing from the Pacific. Considering the entire year, the spatial pattern for precipitation resembles the typical ENSO winter signature. If the summer season is considered separately, the out-of-phase relationship between precipitation anomalies in the central U.S. and the core monsoon region is apparent. The largest soil moisture anomalies occur in the central U.S., since precipitation in this region has a consistent relationship to Pacific SSTs for the entire year. This helps to explain the approximately 20 year periodicity in drought conditions there. Unlike soil moisture, the largest anomalies in vegetation occur in the
Zheng, Xueying; Qin, Guoyou; Tu, Dongsheng
2017-05-30
Motivated by the analysis of quality-of-life data from a clinical trial on early breast cancer, we propose in this paper a generalized partially linear mean-covariance regression model for longitudinal proportional data, which are bounded in a closed interval. Cholesky decomposition of the covariance matrix for within-subject responses and generalized estimating equations are used to estimate unknown parameters and the nonlinear function in the model. Simulation studies are performed to evaluate the performance of the proposed estimation procedures. Our new model is also applied to analyze the data from the cancer clinical trial that motivated this research. In comparison with available models in the literature, the proposed model does not require specific parametric assumptions on the density function of the longitudinal responses and the probability function of the boundary values, and can capture dynamic changes of time or other variables of interest on both the mean and covariance of the correlated proportional responses. Copyright © 2017 John Wiley & Sons, Ltd.
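The Cholesky device used in mean-covariance regression models of this kind is commonly the modified Cholesky parameterisation, which factors the within-subject covariance as Σ = L D Lᵀ with L unit lower triangular: the below-diagonal entries of L⁻¹ act as autoregressive coefficients and the diagonal of D as innovation variances, both of which can then be modelled by regressions. A minimal sketch, assuming this is the decomposition intended; the AR(1)-style example covariance is invented.

```python
import numpy as np

def modified_cholesky(Sigma):
    """Factor Sigma = L D L^T with L unit lower triangular, D diagonal."""
    C = np.linalg.cholesky(Sigma)   # ordinary Cholesky factor
    d = np.diag(C)
    L = C / d                       # divide column j by d[j] -> unit diagonal
    D = np.diag(d ** 2)             # innovation variances
    return L, D

# AR(1)-like within-subject covariance for 4 repeated measurements.
rho = 0.6
t = np.arange(4)
Sigma = rho ** np.abs(np.subtract.outer(t, t))

L, D = modified_cholesky(Sigma)
print(np.allclose(L @ D @ L.T, Sigma))
```

Because the factorisation is unconstrained (any real L entries and positive D entries give a valid covariance), regression models on these elements never leave the space of positive-definite matrices.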
Lika, Konstadia; Kearney, Michael R.; Kooijman, Sebastiaan A. L. M.
2011-11-01
The covariation method for estimating the parameters of the standard Dynamic Energy Budget (DEB) model provides a single-step method of accessing all the core DEB parameters from commonly available empirical data. In this study, we assess the robustness of this parameter estimation procedure and analyse the role of pseudo-data using elasticity coefficients. In particular, we compare the performance of Maximum Likelihood (ML) vs. Weighted Least Squares (WLS) approaches and find that the two approaches tend to converge in performance as the number of uni-variate data sets increases, but that WLS is more robust when data sets comprise single points (zero-variate data). The efficiency of the approach is shown to be high, and the prior parameter estimates (pseudo-data) have very little influence if the real data contain information about the parameter values. For instance, the effects of the pseudo-value for the allocation fraction κ is reduced when there is information for both growth and reproduction, that for the energy conductance is reduced when information on age at birth and puberty is given, and the effects of the pseudo-value for the maturity maintenance rate coefficient are insignificant. The estimation of some parameters (e.g., the zoom factor and the shape coefficient) requires little information, while that of others (e.g., maturity maintenance rate, puberty threshold and reproduction efficiency) require data at several food levels. The generality of the standard DEB model, in combination with the estimation of all of its parameters, allows comparison of species on the basis of parameter values. We discuss a number of preliminary patterns emerging from the present collection of parameter estimates across a wide variety of taxa. We make the observation that the estimated value of the fraction κ of mobilised reserve that is allocated to soma is far away from the value that maximises reproduction. We recognise this as the reason why two very different
Székely, Gábor J.; Rizzo, Maria L.
2010-01-01
Distance correlation is a new class of multivariate dependence coefficients applicable to random vectors of arbitrary and not necessarily equal dimension. Distance covariance and distance correlation are analogous to product-moment covariance and correlation, but generalize and extend these classical bivariate measures of dependence. Distance correlation characterizes independence: it is zero if and only if the random vectors are independent. The notion of covariance with...
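The sample distance correlation can be computed directly from double-centred pairwise distance matrices. A minimal sketch of the Székely-Rizzo V-statistic version follows; the quadratic dependence example is illustrative, chosen because it has near-zero Pearson correlation yet nonzero distance correlation.

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation; works for vectors of arbitrary
    (and not necessarily equal) dimension."""
    x = np.asarray(x, float).reshape(len(x), -1)
    y = np.asarray(y, float).reshape(len(y), -1)

    def centred(z):
        # Pairwise Euclidean distance matrix, double-centred.
        d = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
        return d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()

    A, B = centred(x), centred(y)
    dcov2 = (A * B).mean()                       # squared distance covariance
    dvar_x, dvar_y = (A * A).mean(), (B * B).mean()
    return np.sqrt(dcov2 / np.sqrt(dvar_x * dvar_y))

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y_dep = x ** 2          # dependent on x, but essentially uncorrelated with it
print(round(distance_correlation(x, y_dep), 2))
```

Unlike Pearson correlation, this statistic is positive here, reflecting the characterisation of independence mentioned in the abstract.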
Kozlov, Daniil
2014-05-01
The topographical, soil and vegetation maps of FLUXNET study areas are widely used for the interpretation of eddy covariance measurements, for the calibration of biogeochemical models and for making regional assessments of carbon balance. The poster presents methodological problems and results of ecosystem mapping using GIS, remote sensing, statistical and field methods, using the example of two RusFluxNet sites in the Central Forest (33° E, 56°30'N) and Central Chernozem (36°10' E, 51°36'N) reserves. In the Central Forest reserve, tacheometric measurements were used for topographical and peat surveys of a bogged sphagnum spruce forest of 20-hectare area. Its boundaries and the areas affected by windfall were determined. The stocks and spatial distribution of organic matter were obtained. Groundwater monitoring datasets from ten wells were compared, and an analysis of spatial and temporal groundwater variability was performed. The map of typical ecosystems of the reserve and its surroundings was created on the basis of an analysis of multi-temporal Landsat images. In the Central Chernozem reserve, a GNSS topographical survey was used for flux tower footprint mapping (22 ha). The features of the microrelief predetermine the development of different soils within the footprint. The close relationship between soil (73 drilling sites) and terrain attributes (a DEM with 2.5 m resolution) allowed us to build maps of soils and soil properties: carbon content, bulk density, and the upper boundary of secondary carbonates. Positions for chamber-based soil respiration measurements were defined on the basis of these maps. Detailed geodetic and soil surveys of virgin lands and plowland were performed in order to estimate the effect of agrogenic processes such as dehumification, compaction and erosion on soils during the whole period of agricultural use of the Central Chernozem reserve area and its surroundings. The choice of analogous soils was based on the similarity of their position within the
Covariant diagrams for one-loop matching
Zhang, Zhengkang [Michigan Center for Theoretical Physics (MCTP), University of Michigan,450 Church Street, Ann Arbor, MI 48109 (United States); Deutsches Elektronen-Synchrotron (DESY),Notkestraße 85, 22607 Hamburg (Germany)]
2017-05-30
We present a diagrammatic formulation of recently-revived covariant functional approaches to one-loop matching from an ultraviolet (UV) theory to a low-energy effective field theory. Various terms following from a covariant derivative expansion (CDE) are represented by diagrams which, unlike conventional Feynman diagrams, involve gauge-covariant quantities and are thus dubbed “covariant diagrams.” The use of covariant diagrams helps organize and simplify one-loop matching calculations, which we illustrate with examples. Of particular interest is the derivation of UV model-independent universal results, which reduce matching calculations of specific UV models to applications of master formulas. We show how such derivation can be done in a more concise manner than the previous literature, and discuss how additional structures that are not directly captured by existing universal results, including mixed heavy-light loops, open covariant derivatives, and mixed statistics, can be easily accounted for.
Covariant diagrams for one-loop matching
Zhang, Zhengkang
2017-01-01
We present a diagrammatic formulation of recently-revived covariant functional approaches to one-loop matching from an ultraviolet (UV) theory to a low-energy effective field theory. Various terms following from a covariant derivative expansion (CDE) are represented by diagrams which, unlike conventional Feynman diagrams, involve gauge-covariant quantities and are thus dubbed “covariant diagrams.” The use of covariant diagrams helps organize and simplify one-loop matching calculations, which we illustrate with examples. Of particular interest is the derivation of UV model-independent universal results, which reduce matching calculations of specific UV models to applications of master formulas. We show how such derivation can be done in a more concise manner than the previous literature, and discuss how additional structures that are not directly captured by existing universal results, including mixed heavy-light loops, open covariant derivatives, and mixed statistics, can be easily accounted for.
Ginelli, Francesco; Politi, Antonio; Chaté, Hugues; Livi, Roberto
2013-01-01
Recent years have witnessed a growing interest in covariant Lyapunov vectors (CLVs) which span local intrinsic directions in the phase space of chaotic systems. Here, we review the basic results of ergodic theory, with a specific reference to the implications of Oseledets’ theorem for the properties of the CLVs. We then present a detailed description of a ‘dynamical’ algorithm to compute the CLVs and show that it generically converges exponentially in time. We also discuss its numerical performance and compare it with other algorithms presented in the literature. We finally illustrate how CLVs can be used to quantify deviations from hyperbolicity with reference to a dissipative system (a chain of Hénon maps) and a Hamiltonian model (a Fermi–Pasta–Ulam chain). This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical devoted to ‘Lyapunov analysis: from dynamical systems theory to applications’. (paper)
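The forward/backward structure of the dynamical algorithm reviewed above can be sketched in a few lines of NumPy. The constant-Jacobian system below is a toy stand-in of my own choosing (not from the review): for a constant symmetric Jacobian, the CLVs must converge to the eigenvectors and the Lyapunov exponents to the log-moduli of the eigenvalues, which makes the sketch checkable.

```python
import numpy as np

def clvs_ginelli(jacobians):
    """Sketch of the 'dynamical' algorithm of Ginelli et al.: forward QR
    iterations yield the backward Lyapunov basis Q_n and triangular
    factors R_n; evolving an upper-triangular coefficient matrix C
    backwards through R_n^{-1} (with column renormalisation) converges
    to the covariant Lyapunov vectors V_n = Q_n C_n."""
    d = jacobians[0].shape[0]
    Q = np.linalg.qr(np.random.default_rng(0).normal(size=(d, d)))[0]
    Qs, Rs, exps = [], [], np.zeros(d)
    for J in jacobians:                      # forward pass
        Q, R = np.linalg.qr(J @ Q)
        s = np.sign(np.diag(R))              # fix the QR sign convention
        Q, R = Q * s, s[:, None] * R
        Qs.append(Q)
        Rs.append(R)
        exps += np.log(np.diag(R))           # Lyapunov exponents from diag(R)
    exps /= len(jacobians)
    m = len(Rs) // 2                         # read off CLVs mid-trajectory
    C = np.triu(np.ones((d, d)))             # backward pass
    for R in reversed(Rs[m + 1:]):
        C = np.linalg.solve(R, C)
        C /= np.linalg.norm(C, axis=0)
    return exps, Qs[m] @ C

# Constant-Jacobian toy system: the CLVs must align with the eigenvectors.
A = np.array([[2.0, 1.0], [1.0, 1.0]])
exps, V = clvs_ginelli([A] * 400)
```

The exponential convergence discussed in the review shows up here directly: the backward iteration contracts the off-eigenvector components by the ratio of consecutive eigenvalue moduli at every step.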
Savron, V.I.; Skachkov, N.B.; Tyumenkov, G.Yu.
1982-01-01
A covariant three-dimensional equation is derived for the wave function of a pseudoscalar particle composed of two equal-mass spin-1/2 quarks (a quark and an antiquark). This equation describes the relative motion of the two quarks in the π meson. The asymptotics of the solution of this equation is found in the momentum representation for a quark interaction chosen in the form of a one-gluon-exchange amplitude.
Covariance Function for Nearshore Wave Assimilation Systems
2018-01-30
…which is applicable for any spectral wave model. The four-dimensional variational (4DVar) assimilation methods are based on the mathematical … covariance can be modeled by a parameterized Gaussian function; for nearshore wave assimilation applications, the covariance function depends primarily on …
Treatment Effects with Many Covariates and Heteroskedasticity
Cattaneo, Matias D.; Jansson, Michael; Newey, Whitney K.
The linear regression model is widely used in empirical work in economics. Researchers often include many covariates in their linear model specification in an attempt to control for confounders. We give inference methods that allow for many covariates and heteroskedasticity. Our results...
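A minimal sketch of heteroskedasticity-robust OLS inference in the spirit of this abstract, using the standard Eicker-Huber-White HC3 estimator on simulated data (the paper's specific many-covariate corrections are not reproduced here):

```python
import numpy as np

# OLS with an HC3 heteroskedasticity-robust covariance estimate.
rng = np.random.default_rng(1)
n, k = 500, 8
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
beta = np.zeros(k)
beta[1] = 2.0                                # coefficient of interest
e = rng.normal(size=n) * (1.0 + np.abs(X[:, 1]))   # heteroskedastic errors
y = X @ beta + e

XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y                        # OLS point estimates
u = y - X @ b                                # residuals
h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)  # leverages h_i
# HC3 "meat": sum_i x_i x_i' u_i^2 / (1 - h_i)^2
meat = X.T @ (X * ((u / (1.0 - h)) ** 2)[:, None])
V_hc3 = XtX_inv @ meat @ XtX_inv
se_hc3 = np.sqrt(np.diag(V_hc3))
```

HC3's leverage rescaling is one common finite-sample refinement of the basic White estimator; with genuinely many covariates (large k/n), the adjustments studied in the paper go further than this.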
Chartin, Caroline; Stevens, Antoine; van Wesemael, Bas
2015-04-01
Providing spatially continuous soil organic carbon (SOC) data is needed to support decisions regarding soil management and to inform the political debate with quantified estimates of the status and change of the soil resource. Digital Soil Mapping (DSM) techniques are based on relations existing between a soil parameter (measured at different locations in space at a defined period) and relevant covariates (spatially continuous data) that are factors controlling soil formation and explaining the spatial variability of the target variable. This study aimed to apply DSM techniques to recent SOC content measurements (2005-2013) in three different land uses, i.e. cropland, grassland, and forest, in the Walloon region (southern Belgium). For this purpose, the SOC databases of two regional Soil Monitoring Networks (CARBOSOL for croplands and grasslands, and IPRFW for forests) were first harmonized, totalling about 1,220 observations. Median values of SOC content for croplands, grasslands, and forests are 12.8, 29.0, and 43.1 g C kg-1, respectively. Then, a set of spatial layers was prepared with a resolution of 40 meters and the same grid topology, containing environmental covariates such as land use, a Digital Elevation Model and its derivatives, soil texture, the C factor, carbon inputs by manure, and climate. Here, in addition to the three classical texture classes (clay, silt, and sand), we tested the use of clay + fine silt content (particles < 20 µm, related to the stable carbon fraction) as a soil covariate explaining SOC variations. For each of the three land uses (cropland, grassland and forest), a Generalized Additive Model (GAM) was calibrated on two thirds of the respective dataset. The remaining samples were assigned to a test set to assess model performance. A backward stepwise procedure was followed to select the relevant environmental covariates using their approximate p-values (the level of significance was set at p < 0.05). Standard errors were estimated for each of
Dimitrios Stamovlasis
2018-04-01
This paper illustrates two psychometric methods, latent class analysis (LCA) and taxometric analysis (TA), using empirical data from research probing children's mental representations in science learning. LCA is used to obtain a typology based on observed variables and to further investigate how the encountered classes might be related to external variables, where the effectiveness of the classification process and unbiased estimation of parameters become the main concern. In step-wise LCA, class membership is assigned first and its relationship with covariates is established subsequently. This leading-edge modeling approach can suffer from severely downward-biased estimates. The illustration of LCA is focused on alternative bias-correction approaches and demonstrates the effect of modal and proportional class-membership assignment along with the BCH and ML correction procedures. The illustration of LCA is presented with three covariates, which are psychometric variables operationalizing formal reasoning, divergent thinking and field dependence-independence, respectively. Moreover, taxometric analysis, a method designed to detect the type of the latent structural model, categorical or dimensional, is introduced, along with the relevant basic concepts and tools. TA was applied complementarily to the same data sets to answer the fundamental hypothesis about children's naïve knowledge on the matters under study, and it comprises an additional asset in building theory, which is fundamental for educational practices. Taxometric analysis provided results that were ambiguous as to the type of the latent structure. This finding initiates further discussion and sets a problematization within this framework, rethinking fundamental assumptions and epistemological issues.
Bobrowski, Sebastian; Chen, Hong; Döring, Maik; Jensen, Uwe; Schinköthe, Wolfgang
2015-01-01
In practice, manufacturers often have large amounts of failure data for similar products that use the same technology basis under different operating conditions. One can therefore try to derive predictions for the lifetime distribution of newly developed components or new application environments from the existing data using regression models based on covariates. Three categories of such regression models are considered: a parametric, a semiparametric and a nonparametric approach. First, we assume that the lifetime is Weibull distributed, with its parameters modelled as linear functions of the covariate. Second, the Cox proportional hazards model, well known in survival analysis, is applied. Finally, a kernel estimator is used to interpolate between empirical distribution functions. In particular, the last case is new in the context of reliability analysis. We propose a goodness-of-fit (GoF) measure, which can be applied to all three types of regression models. Using this GoF measure we discuss a new model selection procedure. To illustrate this method of reliability prediction, the three classes of regression models are applied to real test data from motor experiments. Furthermore, the performance of the approaches is investigated by Monte Carlo simulations. - Highlights: • We estimate the lifetime distribution in the presence of a covariate. • Three types of regression models are considered and compared. • A new nonparametric estimator based on our particular data structure is introduced. • We propose a goodness of fit measure and show a new model selection procedure. • A case study with real data and Monte Carlo simulations are performed
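The first, parametric category can be sketched as follows, assuming (as one plausible reading, not necessarily the authors' exact specification) a Weibull lifetime whose log-scale is linear in a single covariate, fitted by maximum likelihood:

```python
import numpy as np
from scipy.optimize import minimize

# Weibull regression sketch: lifetimes T ~ Weibull(shape k, scale lam(z))
# with log lam(z) = a + b*z, fitted by maximum likelihood on simulated data.
rng = np.random.default_rng(2)
n = 2000
z = rng.uniform(0.0, 1.0, size=n)                 # operating-condition covariate
shape_true, a_true, b_true = 1.5, 1.0, -0.8
t = np.exp(a_true + b_true * z) * rng.weibull(shape_true, size=n)

def neg_loglik(theta):
    log_k, a, b = theta
    k = np.exp(log_k)                  # keep the shape positive via log transform
    lam = np.exp(a + b * z)
    x = t / lam
    # Weibull log-density: log(k/lam) + (k-1) log(t/lam) - (t/lam)^k
    return -np.sum(np.log(k / lam) + (k - 1) * np.log(x) - x ** k)

res = minimize(neg_loglik, x0=np.array([0.5, 0.5, -0.5]),
               method='Nelder-Mead',
               options={'maxiter': 10000, 'maxfev': 10000})
k_hat, a_hat, b_hat = np.exp(res.x[0]), res.x[1], res.x[2]
```

The semiparametric (Cox) and nonparametric (kernel) categories would replace the parametric likelihood above while keeping the same covariate structure.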
Uncertainty covariances in robotics applications
Smith, D.L.
1984-01-01
The application of uncertainty covariance matrices in the analysis of robot trajectory errors is explored. First, relevant statistical concepts are reviewed briefly. Then, a simple, hypothetical robot model is considered to illustrate methods for error propagation and performance test data evaluation. The importance of including error correlations is emphasized
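The basic first-order propagation step, Σ_p = J Σ_q Jᵀ, can be illustrated on a hypothetical planar two-link arm (an illustrative model, not the report's robot); comparing correlated and uncorrelated joint-angle errors shows why including the correlations matters:

```python
import numpy as np

# Propagate joint-angle uncertainty to end-effector position for a
# planar two-link arm via the kinematic Jacobian: Sigma_p = J Sigma_q J^T.
l1, l2 = 1.0, 0.8                           # link lengths
q = np.array([0.6, 0.4])                    # joint angles [rad]
s1, s12 = np.sin(q[0]), np.sin(q.sum())
c1, c12 = np.cos(q[0]), np.cos(q.sum())
J = np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
              [ l1 * c1 + l2 * c12,  l2 * c12]])

sig = 1e-2                                  # 0.01 rad angle uncertainty
rho = 0.9                                   # strongly correlated joint errors
Sq_corr = sig ** 2 * np.array([[1.0, rho], [rho, 1.0]])
Sq_ind = sig ** 2 * np.eye(2)               # correlations ignored

Sp_corr = J @ Sq_corr @ J.T                 # end-effector covariance, correlated
Sp_ind = J @ Sq_ind @ J.T                   # end-effector covariance, independent
```

The two propagated covariances differ markedly, which is the report's point: dropping error correlations distorts the predicted trajectory-error ellipse even when the marginal variances are unchanged.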
Bergshoeff, E.; Pope, C.N.; Stelle, K.S.
1990-01-01
We discuss the notion of higher-spin covariance in w∞ gravity. We show how a recently proposed covariant w∞ gravity action can be obtained from non-chiral w∞ gravity by making field redefinitions that introduce new gauge-field components with corresponding new gauge transformations.
A class of covariate-dependent spatiotemporal covariance functions
Reich, Brian J; Eidsvik, Jo; Guindani, Michele; Nail, Amy J; Schmidt, Alexandra M.
2014-01-01
In geostatistics, it is common to model spatially distributed phenomena through an underlying stationary and isotropic spatial process. However, these assumptions are often untenable in practice because of the influence of local effects in the correlation structure. Therefore, it has been of prolonged interest in the literature to provide flexible and effective ways to model non-stationarity in the spatial effects. Arguably, due to the local nature of the problem, we might envision that the correlation structure would be highly dependent on local characteristics of the domain of study, namely the latitude, longitude and altitude of the observation sites, as well as other locally defined covariate information. In this work, we provide a flexible and computationally feasible way for allowing the correlation structure of the underlying processes to depend on local covariate information. We discuss the properties of the induced covariance functions and discuss methods to assess its dependence on local covariate information by means of a simulation study and the analysis of data observed at ozone-monitoring stations in the Southeast United States. PMID:24772199
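One concrete way to let a covariance function depend on local covariate information, shown here with the textbook Gibbs non-stationary kernel rather than the authors' construction, is to drive the local length-scale with a covariate such as altitude:

```python
import numpy as np

def gibbs_cov(x, lengthscales, var=1.0):
    """Gibbs non-stationary kernel in 1-D: positive definite for any
    positive length-scale function l(x)."""
    l = lengthscales
    l2 = l[:, None] ** 2 + l[None, :] ** 2
    pref = np.sqrt(2.0 * np.outer(l, l) / l2)
    d2 = (x[:, None] - x[None, :]) ** 2
    return var * pref * np.exp(-d2 / l2)

x = np.linspace(0.0, 10.0, 60)              # observation sites
altitude = np.sin(x / 3.0)                  # toy local covariate field
l = np.exp(-0.5 + 0.8 * altitude)           # covariate-driven length-scale
K = gibbs_cov(x, l)                         # non-stationary covariance matrix
```

Regions of high "altitude" get longer correlation ranges, giving locally varying smoothness while the matrix remains a valid covariance.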
Competing risks and time-dependent covariates
Cortese, Giuliana; Andersen, Per K
2010-01-01
Time-dependent covariates are frequently encountered in regression analysis for event history data and competing risks. They are often essential predictors, which cannot be substituted by time-fixed covariates. This study briefly recalls the different types of time-dependent covariates, as classified by Kalbfleisch and Prentice [The Statistical Analysis of Failure Time Data, Wiley, New York, 2002], with the intent of clarifying their role and emphasizing the limitations in standard survival models and in the competing risks setting. If random (internal) time-dependent covariates…
Evaluation of covariance in theoretical calculation of nuclear data
Kikuchi, Yasuyuki
1981-01-01
Covariances of the cross sections are discussed on the statistical model calculations. Two categories of covariance are discussed: One is caused by the model approximation and the other by the errors in the model parameters. As an example, the covariances are calculated for {sup 100}Ru. (author)
Covariant representations of nuclear *-algebras
Moore, S.M.
1978-01-01
Extensions of the C*-algebra theory of covariant representations to nuclear *-algebras are considered. Irreducible covariant representations are essentially unique, an invariant state produces a covariant representation with stable vacuum, and the usual relation between ergodic states and covariant representations holds. There exist construction and decomposition theorems and a possible relation between derivations and covariant representations.
Covariant Noncommutative Field Theory
Estrada-Jimenez, S [Licenciaturas en Fisica y en Matematicas, Facultad de Ingenieria, Universidad Autonoma de Chiapas Calle 4a Ote. Nte. 1428, Tuxtla Gutierrez, Chiapas (Mexico); Garcia-Compean, H [Departamento de Fisica, Centro de Investigacion y de Estudios Avanzados del IPN P.O. Box 14-740, 07000 Mexico D.F., Mexico and Centro de Investigacion y de Estudios Avanzados del IPN, Unidad Monterrey Via del Conocimiento 201, Parque de Investigacion e Innovacion Tecnologica (PIIT) Autopista nueva al Aeropuerto km 9.5, Lote 1, Manzana 29, cp. 66600 Apodaca Nuevo Leon (Mexico); Obregon, O [Instituto de Fisica de la Universidad de Guanajuato P.O. Box E-143, 37150 Leon Gto. (Mexico); Ramirez, C [Facultad de Ciencias Fisico Matematicas, Universidad Autonoma de Puebla, P.O. Box 1364, 72000 Puebla (Mexico)
2008-07-02
The covariant approach to noncommutative field and gauge theories is revisited. In the process the formalism is applied to field theories invariant under diffeomorphisms. Local differentiable forms are defined in this context. The lagrangian and hamiltonian formalism is consistently introduced.
Covariant Noncommutative Field Theory
Estrada-Jimenez, S.; Garcia-Compean, H.; Obregon, O.; Ramirez, C.
2008-01-01
The covariant approach to noncommutative field and gauge theories is revisited. In the process the formalism is applied to field theories invariant under diffeomorphisms. Local differentiable forms are defined in this context. The lagrangian and hamiltonian formalism is consistently introduced.
Covariant diagrams for one-loop matching
Zhang, Zhengkang
2016-10-01
We present a diagrammatic formulation of recently-revived covariant functional approaches to one-loop matching from an ultraviolet (UV) theory to a low-energy effective field theory. Various terms following from a covariant derivative expansion (CDE) are represented by diagrams which, unlike conventional Feynman diagrams, involve gauge-covariant quantities and are thus dubbed “covariant diagrams.” The use of covariant diagrams helps organize and simplify one-loop matching calculations, which we illustrate with examples. Of particular interest is the derivation of UV model-independent universal results, which reduce matching calculations of specific UV models to applications of master formulas. We show how such derivation can be done in a more concise manner than the previous literature, and discuss how additional structures that are not directly captured by existing universal results, including mixed heavy-light loops, open covariant derivatives, and mixed statistics, can be easily accounted for.
Covariant diagrams for one-loop matching
Zhang, Zhengkang [Michigan Univ., Ann Arbor, MI (United States). Michigan Center for Theoretical Physics; Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)
2016-10-15
We present a diagrammatic formulation of recently-revived covariant functional approaches to one-loop matching from an ultraviolet (UV) theory to a low-energy effective field theory. Various terms following from a covariant derivative expansion (CDE) are represented by diagrams which, unlike conventional Feynman diagrams, involve gauge-covariant quantities and are thus dubbed “covariant diagrams.” The use of covariant diagrams helps organize and simplify one-loop matching calculations, which we illustrate with examples. Of particular interest is the derivation of UV model-independent universal results, which reduce matching calculations of specific UV models to applications of master formulas. We show how such derivation can be done in a more concise manner than the previous literature, and discuss how additional structures that are not directly captured by existing universal results, including mixed heavy-light loops, open covariant derivatives, and mixed statistics, can be easily accounted for.
On estimating cosmology-dependent covariance matrices
Morrison, Christopher B.; Schneider, Michael D.
2013-01-01
We describe a statistical model to estimate the covariance matrix of matter tracer two-point correlation functions with cosmological simulations. Assuming a fixed number of cosmological simulation runs, we describe how to build a 'statistical emulator' of the two-point function covariance over a specified range of input cosmological parameters. Because the simulation runs with different cosmological models help to constrain the form of the covariance, we predict that the cosmology-dependent covariance may be estimated with a comparable number of simulations as would be needed to estimate the covariance for fixed cosmology. Our framework is a necessary first step in planning a simulations campaign for analyzing the next generation of cosmological surveys
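The basic ingredient such an emulator interpolates across cosmologies, a mock-based estimate of the two-point-function covariance at a fixed cosmology, can be sketched as follows (toy mocks, not an actual simulation campaign; the Hartlap factor shown is the standard debiasing of the inverse covariance, not something introduced by this paper):

```python
import numpy as np

# Estimate the covariance of a binned two-point function from mock
# realisations at a single fixed cosmology.
rng = np.random.default_rng(3)
n_mocks, n_bins = 200, 10
r = np.linspace(10.0, 100.0, n_bins)               # separation bins
xi_true = (r / 50.0) ** -1.8                       # toy correlation function
# Correlated "noise" between neighbouring bins, with known covariance:
C_true = 0.01 * np.exp(-np.abs(r[:, None] - r[None, :]) / 30.0)
xi_mocks = xi_true + rng.multivariate_normal(np.zeros(n_bins), C_true,
                                             size=n_mocks)

C_hat = np.cov(xi_mocks, rowvar=False)             # unbiased sample covariance
# The Hartlap factor debiases the *inverse* covariance used in likelihoods.
hartlap = (n_mocks - n_bins - 2) / (n_mocks - 1)
C_inv = hartlap * np.linalg.inv(C_hat)
```

The paper's point is that runs at different cosmologies can share information, so the cosmology-dependent covariance need not be estimated independently, bin by bin, at every parameter point.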
Collalti, A.; Marconi, S.; Ibrom, Andreas
2016-01-01
This study evaluates the performance of the new version (v.5.1) of the 3D-CMCC Forest Ecosystem Model (FEM) in simulating gross primary productivity (GPP) against eddy covariance GPP data for 10 FLUXNET forest sites across Europe. A new carbon allocation module, coupled with new phenological … over Europe without site-related calibration, the model has been deliberately parametrized with a single set of species-specific parametrizations for each forest ecosystem. The model consistently reproduces daily and monthly GPP variability across all sites, both in timing and in magnitude … sites we evaluate whether a more accurate representation of forest structural characteristics (i.e. cohorts, forest layers) and species composition can improve model results. In two of the three sites, results reveal that the model slightly improves its performance although, statistically speaking…
Ehler, Martin; Rajapakse, Vinodh; Zeeberg, Barry; Brooks, Brian; Brown, Jacob; Czaja, Wojciech; Bonner, Robert F.
The gene networks underlying closure of the optic fissure during vertebrate eye development are poorly understood. We used a novel clustering method based on Laplacian Eigenmaps, a nonlinear dimension reduction method, to analyze microarray data from laser capture microdissected (LCM) cells at the site and developmental stages (days 10.5 to 12.5) of optic fissure closure. Our new method provided greater biological specificity than classical clustering algorithms in terms of identifying more biological processes and functions related to eye development as defined by Gene Ontology at lower false discovery rates. This new methodology builds on the advantages of LCM to isolate pure phenotypic populations within complex tissues and allows improved ability to identify critical gene products expressed at lower copy number. The combination of LCM of embryonic organs, gene expression microarrays, and extracting spatial and temporal co-variations appear to be a powerful approach to understanding the gene regulatory networks that specify mammalian organogenesis.
Covariance data processing code. ERRORJ
Kosako, Kazuaki
2001-01-01
The covariance data processing code, ERRORJ, was developed to process the covariance data of JENDL-3.2. ERRORJ has the processing functions of covariance data for cross sections including resonance parameters, angular distribution and energy distribution. (author)
Forecasting Covariance Matrices: A Mixed Frequency Approach
Halbleib, Roxana; Voev, Valeri
This paper proposes a new method for forecasting covariance matrices of financial returns. The model mixes volatility forecasts from a dynamic model of daily realized volatilities estimated with high-frequency data with correlation forecasts based on daily data. This new approach allows for flexible dependence patterns for volatilities and correlations, and can be applied to covariance matrices of large dimensions. The separate modeling of volatility and correlation forecasts considerably reduces the estimation and measurement error implied by the joint estimation and modeling of covariance…
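The mixed-frequency idea reduces to the decomposition H = D R D: a diagonal matrix of volatility forecasts around a correlation forecast. The sketch below uses a simple EWMA on proxy realized variances and a sample correlation matrix as stand-ins for the paper's dynamic models:

```python
import numpy as np

# Forecast covariance H = D R D: volatilities from (realized-variance)
# EWMA forecasts, correlations from daily returns.
rng = np.random.default_rng(4)
T, k = 500, 3
returns = rng.multivariate_normal(np.zeros(k),
                                  [[1.0, 0.5, 0.2],
                                   [0.5, 1.0, 0.3],
                                   [0.2, 0.3, 1.0]], size=T)
rv = returns ** 2 + 0.1 * rng.random((T, k))   # proxy realized variances

lam = 0.94                                     # RiskMetrics-style decay
h = rv[0].copy()
for t in range(1, T):
    h = lam * h + (1 - lam) * rv[t]            # EWMA variance forecast
D = np.diag(np.sqrt(h))                        # forecast volatilities

R = np.corrcoef(returns, rowvar=False)         # daily correlation forecast
H = D @ R @ D                                  # forecast covariance matrix
```

Because the correlation matrix can be estimated from daily data alone, the same R (or a dynamic version of it) scales to large cross-sections while the volatility models stay univariate.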
Angel, Yoseline
2016-10-25
Hyperspectral remote sensing images are usually affected by atmospheric conditions such as clouds and their shadows, which represents a contamination of reflectance data and complicates the extraction of biophysical variables to monitor phenological cycles of crops. This paper explores a cloud removal approach based on reflectance prediction using multi-temporal data and spatio-temporal statistical models. In particular, a covariance model that captures the behavior of spatial and temporal components in data simultaneously (i.e. non-separable) is considered. Eight weekly images collected from the Hyperion hyper-spectrometer instrument over an agricultural region of Saudi Arabia were used to reconstruct a scene with the presence of cloudy affected pixels over a center-pivot crop. A subset of reflectance values of cloud-free pixels from 50 bands in the spectral range from 426.82 to 884.7 nm at each date, were used as input to fit a parametric family of non-separable and stationary spatio-temporal covariance functions. Applying simple kriging as an interpolator, cloud affected pixels were replaced by cloud-free predicted values per band, obtaining their respective predicted spectral profiles at the same time. An exercise of reconstructing simulated cloudy pixels in a different swath was conducted to assess the model accuracy, achieving root mean square error (RMSE) values per band less than or equal to 3%. The spatial coherence of the results was also checked through absolute error distribution maps demonstrating their consistency.
Angel, Yoseline; Houborg, Rasmus; McCabe, Matthew
2016-01-01
Hyperspectral remote sensing images are usually affected by atmospheric conditions such as clouds and their shadows, which represents a contamination of reflectance data and complicates the extraction of biophysical variables to monitor phenological cycles of crops. This paper explores a cloud removal approach based on reflectance prediction using multi-temporal data and spatio-temporal statistical models. In particular, a covariance model that captures the behavior of spatial and temporal components in data simultaneously (i.e. non-separable) is considered. Eight weekly images collected from the Hyperion hyper-spectrometer instrument over an agricultural region of Saudi Arabia were used to reconstruct a scene with the presence of cloudy affected pixels over a center-pivot crop. A subset of reflectance values of cloud-free pixels from 50 bands in the spectral range from 426.82 to 884.7 nm at each date, were used as input to fit a parametric family of non-separable and stationary spatio-temporal covariance functions. Applying simple kriging as an interpolator, cloud affected pixels were replaced by cloud-free predicted values per band, obtaining their respective predicted spectral profiles at the same time. An exercise of reconstructing simulated cloudy pixels in a different swath was conducted to assess the model accuracy, achieving root mean square error (RMSE) values per band less than or equal to 3%. The spatial coherence of the results was also checked through absolute error distribution maps demonstrating their consistency.
Angel, Yoseline; Houborg, Rasmus; McCabe, Matthew F.
2016-10-01
Hyperspectral remote sensing images are usually affected by atmospheric conditions such as clouds and their shadows, which represents a contamination of reflectance data and complicates the extraction of biophysical variables to monitor phenological cycles of crops. This paper explores a cloud removal approach based on reflectance prediction using multitemporal data and spatio-temporal statistical models. In particular, a covariance model that captures the behavior of spatial and temporal components in data simultaneously (i.e. non-separable) is considered. Eight weekly images collected from the Hyperion hyper-spectrometer instrument over an agricultural region of Saudi Arabia were used to reconstruct a scene with the presence of cloudy affected pixels over a center-pivot crop. A subset of reflectance values of cloud-free pixels from 50 bands in the spectral range from 426.82 to 884.7 nm at each date, were used as input to fit a parametric family of non-separable and stationary spatio-temporal covariance functions. Applying simple kriging as an interpolator, cloud affected pixels were replaced by cloud-free predicted values per band, obtaining their respective predicted spectral profiles at the same time. An exercise of reconstructing simulated cloudy pixels in a different swath was conducted to assess the model accuracy, achieving root mean square error (RMSE) values per band less than or equal to 3%. The spatial coherence of the results was also checked through absolute error distribution maps demonstrating their consistency.
Angel, Yoseline
2016-09-26
Hyperspectral remote sensing images are usually affected by atmospheric conditions such as clouds and their shadows, which represents a contamination of reflectance data and complicates the extraction of biophysical variables to monitor phenological cycles of crops. This paper explores a cloud removal approach based on reflectance prediction using multi-temporal data and spatio-temporal statistical models. In particular, a covariance model that captures the behavior of spatial and temporal components in data simultaneously (i.e. non-separable) is considered. Eight weekly images collected from the Hyperion hyper-spectrometer instrument over an agricultural region of Saudi Arabia were used to reconstruct a scene with the presence of cloudy affected pixels over a center-pivot crop. A subset of reflectance values of cloud-free pixels from 50 bands in the spectral range from 426.82 to 884.7 nm at each date, were used as input to fit a parametric family of non-separable and stationary spatio-temporal covariance functions. Applying simple kriging as an interpolator, cloud affected pixels were replaced by cloud-free predicted values per band, obtaining their respective predicted spectral profiles at the same time. An exercise of reconstructing simulated cloudy pixels in a different swath was conducted to assess the model accuracy, achieving root mean square error (RMSE) values per band less than or equal to 3%. The spatial coherence of the results was also checked through absolute error distribution maps demonstrating their consistency.
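The gap-filling step common to these records can be sketched with simple kriging on a toy field, using a stationary Gaussian spatial covariance in place of the fitted non-separable spatio-temporal family (a deliberate simplification, not the authors' model):

```python
import numpy as np

def gaussian_cov(d, sill=1.0, length=2.5):
    """Stationary Gaussian (squared-exponential) covariance of distance d."""
    return sill * np.exp(-(d / length) ** 2)

rng = np.random.default_rng(5)
grid = np.array([(i, j) for i in range(12) for j in range(12)], float)
field = np.sin(grid[:, 0] / 3.0) * np.cos(grid[:, 1] / 4.0)  # toy reflectance
mean = field.mean()                                          # known mean (simple kriging)

cloudy = rng.choice(len(grid), size=20, replace=False)       # masked pixels
clear = np.setdiff1d(np.arange(len(grid)), cloudy)

d_cc = np.linalg.norm(grid[clear, None] - grid[None, clear], axis=-1)
K = gaussian_cov(d_cc) + 1e-6 * np.eye(len(clear))           # jitter for stability
pred = np.empty(len(cloudy))
for i, p in enumerate(cloudy):
    k0 = gaussian_cov(np.linalg.norm(grid[clear] - grid[p], axis=1))
    w = np.linalg.solve(K, k0)                               # kriging weights
    pred[i] = mean + w @ (field[clear] - mean)               # simple kriging
```

In the papers' setting, K and k0 would come from the fitted non-separable spatio-temporal covariance, so cloud-free pixels from neighbouring dates also contribute to each prediction.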
Covariant electromagnetic field lines
Hadad, Y.; Cohen, E.; Kaminer, I.; Elitzur, A. C.
2017-08-01
Faraday introduced electric field lines as a powerful tool for understanding the electric force, and these field lines are still used today in classrooms and textbooks teaching the basics of electromagnetism within the electrostatic limit. However, despite attempts at generalizing this concept beyond the electrostatic limit, such a fully relativistic field line theory still appears to be missing. In this work, we propose such a theory and define covariant electromagnetic field lines that naturally extend electric field lines to relativistic systems and general electromagnetic fields. We derive a closed-form formula for the curvature of the field lines in the vicinity of a charge, and show that it is related to the world line of the charge. This demonstrates how the kinematics of a charge can be derived from the geometry of the electromagnetic field lines. Such a theory may also provide new tools in modeling and analyzing electromagnetic phenomena, and may entail new insights regarding long-standing problems such as radiation-reaction and self-force. In particular, the curvature of the electromagnetic field lines has the attractive property of being non-singular everywhere, thus eliminating all self-field singularities without using renormalization techniques.
Goldstein, G R
2001-01-01
Spin-dependent fragmentation functions for heavy-flavor quarks fragmenting into heavy baryons are calculated in a quark-diquark model. The production of intermediate spin-1/2 and spin-3/2 excited states is explicitly included. $\Lambda_b$, $\Lambda_c$ and $\Xi_c$ production rates and polarization at LEP energies are calculated and, where possible, compared with experiment. A different approach, also relying on a heavy quark-diquark model, is proposed for the small-momentum-transfer inclusive production of polarized heavy-flavor hyperons. The predicted $\Lambda_c$ polarization is roughly in agreement with experiment.
EMC and polarized EMC effects in nuclei
Cloet, I.C. [Special Research Centre for the Subatomic Structure of Matter and Department of Physics and Mathematical Physics, University of Adelaide, SA 5005 (Australia); Jefferson Lab, 12000 Jefferson Avenue, Newport News, VA 23606 (United States)]. E-mail: icloet@jlab.org; Bentz, W. [Department of Physics, School of Science, Tokai University, Hiratsuka-shi, Kanagawa 259-1292 (Japan)]. E-mail: bentz@keyaki.cc.u-tokai.ac.jp; Thomas, A.W. [Jefferson Lab, 12000 Jefferson Avenue, Newport News, VA 23606 (United States)]. E-mail: awthomas@jlab.org
2006-11-09
We determine nuclear structure functions and quark distributions for {sup 7}Li, {sup 11}B, {sup 15}N and {sup 27}Al. For the nucleon bound state we solve the covariant quark-diquark equations in a confining Nambu-Jona-Lasinio model, which yields excellent results for the free nucleon structure functions. The nucleus is described using a relativistic shell model, including mean scalar and vector fields that couple to the quarks in the nucleon. The nuclear structure functions are then obtained as a convolution of the structure function of the bound nucleon with the light-cone nucleon distributions. We find that we are readily able to reproduce the EMC effect in finite nuclei and confirm earlier nuclear matter studies that found a large polarized EMC effect.
Morenas, Vincent
1997-01-01
The study of semileptonic decays is of crucial importance for beauty physics. It was long believed that the rates of these reactions were saturated by the channels producing ground-state D and D* mesons only. Yet experimental results have recently shown that the contribution of orbitally excited mesons is not that small. This thesis presents a study of the semileptonic decays of B mesons into the first orbitally excited charmed states D**: using the Bakamjian-Thomas formalism to construct the mesonic states, together with the hypothesis of the infinite-mass limit of the heavy quark, we provide a covariant description of the hadronic transition amplitude; moreover, all the 'good' properties of the heavy quark symmetries are naturally fulfilled. We then fixed the dynamics of the quark bound states by introducing four spectroscopic models and made numerical predictions, which are discussed and compared to other theoretical and experimental data where available. Finally, we also applied this formalism to the study of annihilation processes: the transition amplitudes are again written in a covariant way and the properties of the heavy quark symmetries are fulfilled. Numerical predictions of decay constants were made with the same four spectroscopic models. (author)
Schep, Daniel G.; Rubinstein, John L.
2016-01-01
Rotary ATPases couple ATP synthesis or hydrolysis to proton translocation across a membrane. However, understanding proton translocation has been hampered by a lack of structural information for the membrane-embedded a subunit. The V/A-ATPase from the eubacterium Thermus thermophilus is similar in structure to the eukaryotic V-ATPase but has a simpler subunit composition and functions in vivo to synthesize ATP rather than pump protons. We determined the T. thermophilus V/A-ATPase structure by cryo-EM at 6.4 Å resolution. Evolutionary covariance analysis allowed tracing of the a subunit sequence within the map, providing a complete model of the rotary ATPase. Comparing the membrane-embedded regions of the T. thermophilus V/A-ATPase and eukaryotic V-ATPase from Saccharomyces cerevisiae allowed identification of the α-helices that belong to the a subunit and revealed the existence of previously unknown subunits in the eukaryotic enzyme. Subsequent evolutionary covariance analysis enabled construction of a model of the a subunit in the S. cerevisiae V-ATPase that explains numerous biochemical studies of that enzyme. Comparing the two a subunit structures determined here with a structure of the distantly related a subunit from the bovine F-type ATP synthase revealed a conserved pattern of residues, suggesting a common mechanism for proton transport in all rotary ATPases. PMID:26951669
Bourget, Antoine; Troost, Jan [Laboratoire de Physique Théorique, École Normale Supérieure, 24 rue Lhomond, 75005 Paris (France)
2016-03-23
We construct a covariant generating function for the spectrum of chiral primaries of symmetric orbifold conformal field theories with N=(4,4) supersymmetry in two dimensions. For seed target spaces K3 and T{sup 4}, the generating functions capture the SO(21) and SO(5) representation theoretic content of the chiral ring respectively. Via string dualities, we relate the transformation properties of the chiral ring under these isometries of the moduli space to the Lorentz covariance of perturbative string partition functions in flat space.
Dimension from covariance matrices.
Carroll, T L; Byers, J M
2017-02-01
We describe a method to estimate embedding dimension from a time series. This method includes an estimate of the probability that the dimension estimate is valid. Such validity estimates are not common in algorithms for calculating the properties of dynamical systems. The algorithm described here compares the eigenvalues of covariance matrices created from an embedded signal to the eigenvalues for a covariance matrix of a Gaussian random process with the same dimension and number of points. A statistical test gives the probability that the eigenvalues for the embedded signal did not come from the Gaussian random process.
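The eigenvalue comparison described in this abstract can be sketched in a few lines. This is an illustrative reconstruction, not the authors' statistical test: the signal, embedding dimension, and delay below are arbitrary choices.

```python
import numpy as np

def delay_embed(x, m, tau=1):
    """Time-delay embedding of a 1-D signal into m coordinates."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(m)])

rng = np.random.default_rng(0)
t = np.arange(2000)
signal = np.sin(0.8 * t) + 0.1 * rng.standard_normal(t.size)

m = 8
emb = delay_embed(signal, m)
eig_signal = np.linalg.eigvalsh(np.cov(emb.T))

# Reference: eigenvalues of a covariance matrix estimated from a Gaussian
# random process with the same dimension and number of points.
gauss = rng.standard_normal(emb.shape)
eig_noise = np.linalg.eigvalsh(np.cov(gauss.T))

# Eigenvalues clearly above the Gaussian reference indicate deterministic
# structure; the noisy sine occupies a two-dimensional subspace.
n_signif = int(np.sum(eig_signal > eig_noise.max()))
```

The paper's contribution is the statistical test attached to this comparison; the sketch only shows the raw eigenvalue contrast it is built on.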
Wang, T.; Brender, P.; Ciais, P.; Piao, S.; Mahecha, M.D.; Chevallier, F.; Reichstein, M.; Ottle, C.; Maignan, F.; Arain, A.; Bohrer, G.; Cescatti, A.; Kiely, G.; Law, B.E.; Lutz, M.; Montagnani, L.; Moors, E.J.
2012-01-01
Characterization of state-dependent model biases in land surface models can highlight model deficiencies, and provide new insights into model development. In this study, artificial neural networks (ANNs) are used to estimate the state-dependent biases of a land surface model (ORCHIDEE: ORganising
Connan, O; Maro, D; Hébert, D; Solier, L; Caldeira Ideas, P; Laguionie, P; St-Amant, N
2015-10-01
The behaviour of tritium in the environment is linked to the water cycle. We compare three methods of calculating the tritium evapotranspiration flux from grassland cover. The gradient and eddy covariance methods, together with a method based on the theoretical Penman-Monteith model, were tested in a study carried out in 2013 in an environment characterised by high levels of tritium activity. The three methods gave similar results, and the various constraints applying to each method are discussed. The results show a tritium evapotranspiration flux of around 15 mBq m(-2) s(-1) in this environment. These results will be used to improve the entry parameters for the general models of tritium transfers in the environment. Copyright © 2015 Elsevier Ltd. All rights reserved.
Fernández, E N; Legarra, A; Martínez, R; Sánchez, J P; Baselga, M
2017-06-01
Inbreeding generates covariances between additive and dominance effects (breeding values and dominance deviations). In this work, we developed and applied models for estimation of dominance and additive genetic variances and their covariance, a model that we call "full dominance," from pedigree and phenotypic data. Estimates with this model, such as those presented here, are very scarce in both livestock and wild genetics. First, we estimated pedigree-based condensed probabilities of identity using recursion. Second, we developed an equivalent linear model in which variance components can be estimated using standard algorithms such as REML or Gibbs sampling and existing software. Third, we present a new method to refer the estimated variance components to meaningful parameters in a particular population, i.e., final partially inbred generations as opposed to outbred base populations. We applied these developments to three closed rabbit lines (A, V and H) selected for number weaned at the Polytechnic University of Valencia. Pedigree and phenotypes are complete and span 43, 39 and 14 generations, respectively. Estimates of broad-sense heritability are 0.07, 0.07 and 0.05 at the base versus 0.07, 0.07 and 0.09 in the final generations. Narrow-sense heritability estimates are 0.06, 0.06 and 0.02 at the base versus 0.04, 0.04 and 0.01 at the final generations. There is also a reduction in the genotypic variance due to the negative additive-dominance correlation. Thus, the contribution of dominance variation is fairly large and increases with inbreeding and (over)compensates for the loss in additive variation. In addition, estimates of the additive-dominance correlation are -0.37, -0.31 and 0.00, in agreement with the few published estimates and theoretical considerations. © 2017 Blackwell Verlag GmbH.
Groenendijk, M.; Dolman, A.J.; Molen, van der M.K.; Leuning, R.; Arneth, A.; Delpierre, N.; Gash, J.H.C.; Lindroth, A.; Richardson, A.D.; Verbeeck, H.; Wohlfahrt, G.
2011-01-01
The vegetation component in climate models has advanced since the late 1960s from a uniform prescription of surface parameters to plant functional types (PFTs). PFTs are used in global land-surface models to provide parameter values for every model grid cell. With a simple photosynthesis model we
Evaluation of covariance for 238U cross sections
Kawano, Toshihiko; Nakamura, Masahiro; Matsuda, Nobuyuki; Kanda, Yukinori
1995-01-01
Covariances for {sup 238}U are generated using analytic functions to represent the cross sections. The covariances of the (n,2n) and (n,3n) reactions are derived with a spline function, while the covariances of the total and inelastic scattering cross sections are estimated with a linearized nuclear model calculation. (author)
Are your covariates under control? How normalization can re-introduce covariate effects.
Pain, Oliver; Dudbridge, Frank; Ronald, Angelica
2018-04-30
Many statistical tests rely on the assumption that the residuals of a model are normally distributed. Rank-based inverse normal transformation (INT) of the dependent variable is one of the most popular approaches to satisfy the normality assumption. When covariates are included in the analysis, a common approach is to first adjust for the covariates and then normalize the residuals. This study investigated the effect of regressing covariates against the dependent variable and then applying rank-based INT to the residuals. The correlation between the dependent variable and covariates at each stage of processing was assessed. An alternative approach was tested in which rank-based INT was applied to the dependent variable before regressing covariates. Analyses based on both simulated and real data examples demonstrated that applying rank-based INT to the dependent variable residuals after regressing out covariates re-introduces a linear correlation between the dependent variable and covariates, increasing type-I errors and reducing power. On the other hand, when rank-based INT was applied prior to controlling for covariate effects, residuals were normally distributed and linearly uncorrelated with covariates. This latter approach is therefore recommended in situations where normality of the dependent variable is required.
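The rank-based INT at the centre of this abstract has a compact sample formula. A minimal sketch, using the common offset (rank - 0.5)/n; other offset conventions (e.g. Blom's) exist, and ties are assumed absent:

```python
import numpy as np
from statistics import NormalDist

def rank_based_int(values):
    """Rank-based inverse normal transformation of a 1-D sample."""
    values = np.asarray(values, dtype=float)
    n = len(values)
    ranks = values.argsort().argsort() + 1       # ranks 1..n (no ties assumed)
    inv_cdf = np.vectorize(NormalDist().inv_cdf)
    return inv_cdf((ranks - 0.5) / n)

rng = np.random.default_rng(1)
skewed = rng.exponential(size=1000)              # strongly right-skewed sample
z = rank_based_int(skewed)

# The transform maps the sample onto (approximately) standard normal scores,
# regardless of the shape of the original distribution, while preserving order.
```

Because the transform is rank-based, it is monotone: the ordering of `z` matches the ordering of the input exactly.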
Covariate analysis of bivariate survival data
Bennett, L.E.
1992-01-01
The methods developed are used to analyze the effects of covariates on bivariate survival data when censoring and ties are present. The proposed method provides models for bivariate survival data that include differential covariate effects and censored observations. The proposed models are based on an extension of the univariate Buckley-James estimators which replace censored data points by their expected values, conditional on the censoring time and the covariates. For the bivariate situation, it is necessary to determine the expectation of the failure times for one component conditional on the failure or censoring time of the other component. Two different methods have been developed to estimate these expectations. In the semiparametric approach these expectations are determined from a modification of Burke's estimate of the bivariate empirical survival function. In the parametric approach censored data points are also replaced by their conditional expected values where the expected values are determined from a specified parametric distribution. The model estimation will be based on the revised data set, comprised of uncensored components and expected values for the censored components. The variance-covariance matrix for the estimated covariate parameters has also been derived for both the semiparametric and parametric methods. Data from the Demographic and Health Survey was analyzed by these methods. The two outcome variables are post-partum amenorrhea and breastfeeding; education and parity were used as the covariates. Both the covariate parameter estimates and the variance-covariance estimates for the semiparametric and parametric models will be compared. In addition, a multivariate test statistic was used in the semiparametric model to examine contrasts. The significance of the statistic was determined from a bootstrap distribution of the test statistic.
Maeda, Tomohito; Yamada, Kenji; Oda, Masuho; Ishida, Shin
2010-01-01
We investigate the strong decays with one pseudoscalar emission of charmed strange mesons in the covariant oscillator quark model. The wave functions of composite sc-bar mesons are constructed as irreducible representations of U-tilde(4){sub DS} x O(3,1){sub L}. From the observed masses and the results of the decay study we discuss a novel assignment of the observed charmed strange mesons from the viewpoint of the U-tilde(4){sub DS} x O(3,1){sub L} classification scheme. It is shown that D{sub s0}*(2317) and D{sub s1}(2460) are consistently explained as ground-state chiralons, which appear in the U-tilde(4){sub DS} x O(3,1){sub L} scheme. Furthermore, it is also found that the recently observed D{sub s1}*(2710) could be described as a first excited-state chiralon. (author)
Spatiotemporal noise covariance estimation from limited empirical magnetoencephalographic data
Jun, Sung C; Plis, Sergey M; Ranken, Doug M; Schmidt, David M
2006-01-01
The performance of parametric magnetoencephalography (MEG) and electroencephalography (EEG) source localization approaches can be degraded by the use of poor background noise covariance estimates. In general, estimation of the noise covariance for spatiotemporal analysis is difficult mainly due to the limited noise information available. Furthermore, its estimation requires a large amount of storage and a one-time but very large (and sometimes intractable) calculation or its inverse. To overcome these difficulties, noise covariance models consisting of one pair or a sum of multi-pairs of Kronecker products of spatial covariance and temporal covariance have been proposed. However, these approaches cannot be applied when the noise information is very limited, i.e., the amount of noise information is less than the degrees of freedom of the noise covariance models. A common example of this is when only averaged noise data are available for a limited prestimulus region (typically at most a few hundred milliseconds duration). For such cases, a diagonal spatiotemporal noise covariance model consisting of sensor variances with no spatial or temporal correlation has been the common choice for spatiotemporal analysis. In this work, we propose a different noise covariance model which consists of diagonal spatial noise covariance and Toeplitz temporal noise covariance. It can easily be estimated from limited noise information, and no time-consuming optimization and data-processing are required. Thus, it can be used as an alternative choice when one-pair or multi-pair noise covariance models cannot be estimated due to lack of noise information. To verify its capability we used Bayesian inference dipole analysis and a number of simulated and empirical datasets. We compared this covariance model with other existing covariance models such as conventional diagonal covariance, one-pair and multi-pair noise covariance models, when noise information is sufficient to estimate them. We
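The proposed structure (diagonal spatial noise covariance combined with Toeplitz temporal noise covariance) can be sketched as a Kronecker product. Sizes and parameter values below are illustrative only, not taken from the paper:

```python
import numpy as np

def toeplitz(c):
    """Symmetric Toeplitz matrix built from its first column."""
    n = len(c)
    idx = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    return np.asarray(c)[idx]

n_sensors, n_times = 4, 6
rng = np.random.default_rng(2)

# Diagonal spatial covariance: independent per-sensor variances,
# no spatial correlation.
sensor_var = rng.uniform(0.5, 2.0, n_sensors)
D = np.diag(sensor_var)

# Toeplitz temporal covariance: stationary, lag-dependent correlation
# (here an AR(1)-like decay).
T = toeplitz(0.9 ** np.arange(n_times))

# Full spatiotemporal noise covariance as one Kronecker pair.
C = np.kron(D, T)

# The Kronecker structure makes the (otherwise large) inverse cheap:
# inv(kron(D, T)) = kron(inv(D), inv(T)).
C_inv = np.kron(np.diag(1.0 / sensor_var), np.linalg.inv(T))
```

This factored inverse is what makes such models attractive when the full spatiotemporal covariance would be too large to invert directly.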
Cross-covariance functions for multivariate geostatistics
Genton, Marc G.
2015-05-01
Continuously indexed datasets with multiple variables have become ubiquitous in the geophysical, ecological, environmental and climate sciences, and pose substantial analysis challenges to scientists and statisticians. For many years, scientists developed models that aimed at capturing the spatial behavior for an individual process; only within the last few decades has it become commonplace to model multiple processes jointly. The key difficulty is in specifying the cross-covariance function, that is, the function responsible for the relationship between distinct variables. Indeed, these cross-covariance functions must be chosen to be consistent with marginal covariance functions in such a way that the second-order structure always yields a nonnegative definite covariance matrix. We review the main approaches to building cross-covariance models, including the linear model of coregionalization, convolution methods, the multivariate Matérn and nonstationary and space-time extensions of these among others. We additionally cover specialized constructions, including those designed for asymmetry, compact support and spherical domains, with a review of physics-constrained models. We illustrate select models on a bivariate regional climate model output example for temperature and pressure, along with a bivariate minimum and maximum temperature observational dataset; we compare models by likelihood value as well as via cross-validation co-kriging studies. The article closes with a discussion of unsolved problems. © Institute of Mathematical Statistics, 2015.
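As one concrete instance of the constructions reviewed here, the linear model of coregionalization builds a valid cross-covariance from latent univariate correlations. A sketch; the locations, ranges, and coregionalization coefficients are made up:

```python
import numpy as np

rng = np.random.default_rng(3)
sites = rng.uniform(0, 10, size=(25, 2))          # 2-D spatial locations
h = np.linalg.norm(sites[:, None] - sites[None, :], axis=-1)

# Two latent correlation structures with different spatial ranges.
rho = [np.exp(-h / 1.0), np.exp(-h / 4.0)]

# Coregionalization coefficients A (p variables x r latent components).
A = np.array([[1.0, 0.5],
              [0.3, 1.2]])

# LMC: block (i, j) of the joint covariance is sum_k A[i,k] A[j,k] rho_k(h),
# i.e. the cross-covariance between variables i and j.
p, n = A.shape[0], h.shape[0]
C = np.zeros((p * n, p * n))
for i in range(p):
    for j in range(p):
        block = sum(A[i, k] * A[j, k] * rho[k] for k in range(len(rho)))
        C[i*n:(i+1)*n, j*n:(j+1)*n] = block
```

Because C is a sum of Kronecker products of rank-one matrices A[:,k]A[:,k]' with positive definite correlation matrices, the joint covariance is nonnegative definite by construction, which is exactly the consistency requirement the abstract emphasizes.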
Feenstra, T.L.; Postmus, D.; Quik, E.H.; Langendijk, H.; Krabbe, P.F.M.
2013-01-01
Objectives: Recent ISPOR Good practice guidelines as well as literature encourage to use a single distribution rather than the latent failure approach to model time to event for patient level simulation models with multiple competing outcomes. Aim was to apply the preferred method of a single
Covariant field equations in supergravity
Vanhecke, Bram [KU Leuven, Institute for Theoretical Physics, Leuven (Belgium); Ghent University, Faculty of Physics, Gent (Belgium); Proeyen, Antoine van [KU Leuven, Institute for Theoretical Physics, Leuven (Belgium)
2017-12-15
Covariance is a useful property for handling supergravity theories. In this paper, we prove a covariance property of supergravity field equations: under reasonable conditions, field equations of supergravity are covariant modulo other field equations. We prove that for any supergravity there exist such covariant equations of motion, other than the regular equations of motion, that are equivalent to the latter. The relations that we find between field equations and their covariant form can be used to obtain multiplets of field equations. In practice, the covariant field equations are easily found by simply covariantizing the ordinary field equations. (copyright 2017 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
Generally covariant gauge theories
Capovilla, R.
1992-01-01
A new class of generally covariant gauge theories in four space-time dimensions is investigated. The field variables are taken to be a Lie algebra valued connection 1-form and a scalar density. Modulo an important degeneracy, complex [euclidean] vacuum general relativity corresponds to a special case in this class. A canonical analysis of the generally covariant gauge theories with the same gauge group as general relativity shows that they describe two degrees of freedom per space point, qualifying therefore as a new set of neighbors of general relativity. The modification of the algebra of the constraints with respect to the general relativity case is computed; this is used in addressing the question of how general relativity stands out from its neighbors. (orig.)
The Bayesian Covariance Lasso.
Khondker, Zakaria S; Zhu, Hongtu; Chu, Haitao; Lin, Weili; Ibrahim, Joseph G
2013-04-01
Estimation of sparse covariance matrices and their inverse subject to positive definiteness constraints has drawn a lot of attention in recent years. The abundance of high-dimensional data, where the sample size (n) is less than the dimension (d), requires shrinkage estimation methods since the maximum likelihood estimator is not positive definite in this case. Furthermore, when n is larger than d but not sufficiently larger, shrinkage estimation is more stable than maximum likelihood as it reduces the condition number of the precision matrix. Frequentist methods have utilized penalized likelihood methods, whereas Bayesian approaches rely on matrix decompositions or Wishart priors for shrinkage. In this paper we propose a new method, called the Bayesian Covariance Lasso (BCLASSO), for the shrinkage estimation of a precision (covariance) matrix. We consider a class of priors for the precision matrix that leads to the popular frequentist penalties as special cases, develop a Bayes estimator for the precision matrix, and propose an efficient sampling scheme that does not precalculate boundaries for positive definiteness. The proposed method is permutation invariant and performs shrinkage and estimation simultaneously for non-full rank data. Simulations show that the proposed BCLASSO performs similarly to frequentist methods for non-full rank data.
Tay, Louis; Huang, Qiming; Vermunt, Jeroen K.
2016-01-01
In large-scale testing, the use of multigroup approaches is limited for assessing differential item functioning (DIF) across multiple variables as DIF is examined for each variable separately. In contrast, the item response theory with covariate (IRT-C) procedure can be used to examine DIF across multiple variables (covariates) simultaneously. To…
Sebestyen, A.
1975-07-01
The principle of covariance is extended to coordinates corresponding to internal degrees of freedom. The conditions for a system to be isolated are given. It is shown how internal forces arise in such systems. Equations for internal fields are derived. By a group-theoretical interpretation of the generalized coordinates it is shown how particles in the ordinary sense enter into the model; as a simple application, the gravitational interaction of two pointlike particles is considered and the shift of the perihelion is deduced. (Sz.Z.)
Korner-Nievergelt, Fränzi; Brinkmann, Robert; Niermann, Ivo; Behr, Oliver
2013-01-01
Environmental impacts of wind energy facilities increasingly cause concern, a central issue being bats and birds killed by rotor blades. Two approaches have been employed to assess collision rates: carcass searches and surveys of animals prone to collisions. Carcass searches can provide an estimate for the actual number of animals being killed but they offer little information on the relation between collision rates and, for example, weather parameters due to the time of death not being precisely known. In contrast, a density index of animals exposed to collision is sufficient to analyse the parameters influencing the collision rate. However, quantification of the collision rate from animal density indices (e.g. acoustic bat activity or bird migration traffic rates) remains difficult. We combine carcass search data with animal density indices in a mixture model to investigate collision rates. In a simulation study we show that the collision rates estimated by our model were at least as precise as conventional estimates based solely on carcass search data. Furthermore, if certain conditions are met, the model can be used to predict the collision rate from density indices alone, without data from carcass searches. This can reduce the time and effort required to estimate collision rates. We applied the model to bat carcass search data obtained at 30 wind turbines in 15 wind facilities in Germany. We used acoustic bat activity and wind speed as predictors for the collision rate. The model estimates correlated well with conventional estimators. Our model can be used to predict the average collision rate. It enables an analysis of the effect of parameters such as rotor diameter or turbine type on the collision rate. The model can also be used in turbine-specific curtailment algorithms that predict the collision rate and reduce this rate with a minimal loss of energy production.
Bry, X; Verron, T; Cazes, P
2009-05-29
In this work, we consider chemical and physical variable groups describing a common set of observations (cigarettes). One of the groups, minor smoke compounds (minSC), is assumed to depend on the others (minSC predictors). PLS regression (PLSR) of minSC on the set of all predictors appears not to lead to a satisfactory analytic model, because it does not take into account the expert's knowledge. PLS path modeling (PLSPM) does not use the multidimensional structure of predictor groups. Indeed, the expert needs to separate the influence of several pre-designed predictor groups on minSC, in order to see what dimensions this influence involves. To meet these needs, we consider a multi-group component-regression model, and propose a method to extract from each group several strong uncorrelated components that fit the model. Estimation is based on a global multiple covariance criterion, used in combination with an appropriate nesting approach. Compared to PLSR and PLSPM, the structural equation exploratory regression (SEER) we propose fully uses predictor group complementarity, both conceptually and statistically, to predict the dependent group.
Sacks, Jason D; Ito, Kazuhiko; Wilson, William E; Neas, Lucas M
2012-10-01
With the advent of multicity studies, uniform statistical approaches have been developed to examine air pollution-mortality associations across cities. To assess the sensitivity of the air pollution-mortality association to different model specifications in a single and multipollutant context, the authors applied various regression models developed in previous multicity time-series studies of air pollution and mortality to data from Philadelphia, Pennsylvania (May 1992-September 1995). Single-pollutant analyses used daily cardiovascular mortality, fine particulate matter (particles with an aerodynamic diameter ≤2.5 µm; PM(2.5)), speciated PM(2.5), and gaseous pollutant data, while multipollutant analyses used source factors identified through principal component analysis. In single-pollutant analyses, risk estimates were relatively consistent across models for most PM(2.5) components and gaseous pollutants. However, risk estimates were inconsistent for ozone in all-year and warm-season analyses. Principal component analysis yielded factors with species associated with traffic, crustal material, residual oil, and coal. Risk estimates for these factors exhibited less sensitivity to alternative regression models compared with single-pollutant models. Factors associated with traffic and crustal material showed consistently positive associations in the warm season, while the coal combustion factor showed consistently positive associations in the cold season. Overall, mortality risk estimates examined using a source-oriented approach yielded more stable and precise risk estimates, compared with single-pollutant analyses.
Lorentz Covariance of Langevin Equation
Koide, T.; Denicol, G.S.; Kodama, T.
2008-01-01
Relativistic covariance of a Langevin-type equation is discussed. The requirement of Lorentz invariance generates an entanglement between the force and noise terms, so that the noise itself should not be a covariant quantity. (author)
Andersson, C. David; Hillgren, J. Mikael; Lindgren, Cecilia; Qian, Weixing; Akfur, Christine; Berg, Lotta; Ekström, Fredrik; Linusson, Anna
2015-03-01
Scientific disciplines such as medicinal and environmental chemistry, pharmacology, and toxicology deal with questions related to the effects small organic compounds exert on biological targets and the compounds' physicochemical properties responsible for these effects. A common strategy in this endeavor is to establish structure-activity relationships (SARs). The aim of this work was to illustrate the benefits of performing a statistical molecular design (SMD) and proper statistical analysis of the molecules' properties before SAR and quantitative structure-activity relationship (QSAR) analysis. Our SMD followed by synthesis yielded a set of inhibitors of the enzyme acetylcholinesterase (AChE) that had very few inherent dependencies between the substructures in the molecules. If such dependencies exist, they cause severe errors in SAR interpretation and predictions by QSAR models, and leave a set of molecules less suitable for future decision-making. In our study, SAR and QSAR models could show which molecular substructures and physicochemical features were advantageous for AChE inhibition. Finally, the QSAR model was used for the prediction of the inhibition of AChE by an external prediction set of molecules. The accuracy of these predictions was assessed by statistical significance tests and by comparisons to simple but relevant reference models.
Dai, Yunyun
2013-01-01
Mixtures of item response theory (IRT) models have been proposed as a technique to explore response patterns in test data related to cognitive strategies, instructional sensitivity, and differential item functioning (DIF). Estimation proves challenging due to difficulties in identification and questions of effect size needed to recover underlying…
Choi, Youn-Jeng; Alexeev, Natalia; Cohen, Allan S.
2015-01-01
The purpose of this study was to explore what may be contributing to differences in performance in mathematics on the Trends in International Mathematics and Science Study 2007. This was done by using a mixture item response theory modeling approach to first detect latent classes in the data and then to examine differences in performance on items…
Voronin I.
2016-01-01
Structural equation modelling (SEM) has become an important tool in behaviour genetic research. The application of SEM to multivariate twin analysis allows revealing the structure of genetic and environmental factors underlying individual differences in human traits. We outline the framework of the twin method and SEM, describe an SEM implementation of a multivariate twin model, and provide an example of a multivariate twin study. The study included 901 adolescent twin pairs from Russia. We measured general cognitive ability and characteristics of working memory and planning. The individual differences in working memory and planning were explained mostly by person-specific environment. The variability of intelligence is related to genes, family environment, and person-specific environment. Moderate and weak associations between intelligence, working memory, and planning were entirely explained by shared environmental effects.
Distance covariance for stochastic processes
Matsui, Muneya; Mikosch, Thomas Valentin; Samorodnitsky, Gennady
2017-01-01
The distance covariance of two random vectors is a measure of their dependence. The empirical distance covariance and correlation can be used as statistical tools for testing whether two random vectors are independent. We propose an analog of the distance covariance for two stochastic processes...
ENDF-6 File 30: Data covariances obtained from parameter covariances and sensitivities
Muir, D.W.
1989-01-01
File 30 is provided as a means of describing the covariances of tabulated cross sections, multiplicities, and energy-angle distributions that result from propagating the covariances of a set of underlying parameters (for example, the input parameters of a nuclear-model code), using an evaluator-supplied set of parameter covariances and sensitivities. Whenever nuclear data are evaluated primarily through the application of nuclear models, the covariances of the resulting data can be described very adequately, and compactly, by specifying the covariance matrix for the underlying nuclear parameters, along with a set of sensitivity coefficients giving the rate of change of each nuclear datum of interest with respect to each of the model parameters. Although motivated primarily by these applications of nuclear theory, use of File 30 is not restricted to any one particular evaluation methodology. It can be used to describe data covariances of any origin, so long as they can be formally separated into a set of parameters with specified covariances and a set of data sensitivities.
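The parameter-to-data propagation that File 30 encodes is the standard "sandwich rule" V = S M Sᵀ. A minimal sketch, with hypothetical sensitivities and parameter covariances:

```python
import numpy as np

def propagate_covariance(S, M):
    """Propagate parameter covariances to data covariances (sandwich rule).

    S : (n_data x n_params) sensitivity matrix, S[i, j] = d(datum_i)/d(param_j)
    M : (n_params x n_params) parameter covariance matrix
    Returns the (n_data x n_data) data covariance V = S M S^T.
    """
    return S @ M @ S.T

# Hypothetical example: 3 cross sections depending on 2 model parameters
S = np.array([[1.0, 0.2],
              [0.5, 0.8],
              [0.1, 1.5]])
M = np.array([[0.04, 0.01],
              [0.01, 0.09]])
V = propagate_covariance(S, M)
print(np.round(V, 4))
```

The resulting V is symmetric by construction, which is why storing M and S is a compact substitute for storing V itself.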
Xie, Xianhong; Xue, Xiaonan; Strickler, Howard D
2018-01-15
Longitudinal measurement of biomarkers is important in determining risk factors for binary endpoints such as infection or disease. However, biomarkers are subject to measurement error, and some are also subject to left-censoring due to a lower limit of detection. Statistical methods to address these issues are few. We herein propose a generalized linear mixed model and estimate the model parameters using the Monte Carlo Newton-Raphson (MCNR) method. Inferences regarding the parameters are made by applying Louis's method and the delta method. Simulation studies were conducted to compare the proposed MCNR method with existing methods including the maximum likelihood (ML) method and the ad hoc approach of replacing the left-censored values with half of the detection limit (HDL). The results showed that the performance of the MCNR method is superior to ML and HDL with respect to the empirical standard error, as well as the coverage probability for the 95% confidence interval. The HDL method uses an incorrect imputation, and the computation is constrained by the number of quadrature points; while the ML method also suffers from this constraint on the number of quadrature points, the MCNR method does not have this limitation and approximates the likelihood function better than the other methods. The improvement of the MCNR method is further illustrated with real-world data from a longitudinal study of local cervicovaginal HIV viral load and its effects on oncogenic HPV detection in HIV-positive women. Copyright © 2017 John Wiley & Sons, Ltd.
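The ad hoc HDL approach the paper compares against can be sketched in a few lines (hypothetical readings and detection limit; the MCNR estimator itself is far more involved and is not reproduced here):

```python
import numpy as np

def impute_half_detection_limit(values, lod):
    """Ad hoc HDL imputation: replace left-censored readings (below the
    lower limit of detection) with LOD/2. The paper argues this biases
    estimates relative to the proposed MCNR approach."""
    v = np.asarray(values, float)
    return np.where(v < lod, lod / 2.0, v)

# Hypothetical biomarker readings with a detection limit of 1.0
out = impute_half_detection_limit([0.1, 5.0, 0.3, 12.0], lod=1.0)
print(out)
```

The appeal of HDL is its simplicity: downstream models can then ignore censoring entirely, at the cost of the bias documented in the simulations above.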
Huang, Shengli; Liu, Heping; Dahal, Devendra; Jin, Suming; Welp, Lisa R.; Liu, Jinxun; Liu, Shuguang
2013-01-01
In interior Alaska, wildfires change gross primary production (GPP) after the initial disturbance. The impact of fires on GPP is spatially heterogeneous, which is difficult to evaluate through limited point-based comparisons and insufficient to assess with a satellite vegetation index alone. The direct prefire-postfire comparison is widely used, but recovery identification may become biased due to interannual climate variability. The objective of this study is to propose a method to quantify the spatially explicit GPP change caused by fires and succession. We collected three Landsat images acquired on 13 July 2004, 5 August 2004, and 6 September 2004 to examine the GPP recovery of areas burned from 1987 to 2004. A prefire Landsat image acquired in 1986 was used to reconstruct satellite images under the assumption that the fires of 1987-2004 had not occurred. We used a light-use efficiency model to estimate GPP. This model was driven by maximum light-use efficiency (Emax) and the fraction of photosynthetically active radiation absorbed by vegetation (FPAR). We applied this model to two scenarios (an actual postfire scenario and an assumed no-fire scenario), where the changes in Emax and FPAR were taken into account. The changes in Emax were represented by the change in land cover among evergreen needleleaf forest, deciduous broadleaf forest, and mixed shrub/grass, whose Emax was determined from three fire-chronosequence flux towers as 1.1556, 1.3336, and 0.5098 gC/MJ PAR, respectively. The changes in FPAR were inferred from the difference between the actual postfire NDVI and the reconstructed NDVI. After GPP quantification for July, August, and September 2004, we calculated the difference between the two scenarios in absolute and percent GPP changes. Our results showed rapid post-fire recovery of GPP, with 24% recovery immediately after burning and 43% one year later. For fire scars with an age range of 2-17 years, the recovery rate ranged from 54% to 95%. In addition to the averaging
Remarks on Bousso's covariant entropy bound
Mayo, A E
2002-01-01
Bousso's covariant entropy bound is put to the test in the context of a non-singular cosmological solution of general relativity found by Bekenstein. Although the model complies with every assumption made in Bousso's original conjecture, the entropy bound is violated due to the occurrence of negative energy density associated with the interaction of some of the matter components in the model. We demonstrate how this property allows the test model to 'elude' a proof of Bousso's conjecture which was given recently by Flanagan, Marolf and Wald. This corroborates the view that the covariant entropy bound should be applied only to stable systems for which every matter component carries positive energy density.
Morenas, Vincent [Ecole Doctorale des Sciences Fondamentales, Universite Blaise Pascal, U.F.R. de Recherche Scientifique et Technique, F-63177 Aubiere (France)]
1997-12-19
The study of semileptonic decays is of crucial importance for the physics of beauty. It was usually believed that the rates of these reactions were saturated by the channels leading to the production of ground-state D and D{sup *} mesons only. Yet, experimental results have recently shown that the contribution of orbitally excited mesons is not that small. This thesis presents a study of the semileptonic decays of B mesons into the first orbitally excited charmed states D{sup **}: by using the formalism of Bakamjian-Thomas to construct the mesonic states, together with the hypothesis of the infinite-mass limit of the heavy quark, we provide a covariant description of the hadronic transition amplitude; moreover, all the 'good' properties of the heavy quark symmetries are naturally fulfilled. We then fixed the dynamics of the bound states of quarks by introducing four spectroscopic models and made numerical predictions, which are discussed and compared to other theoretical and experimental data when available. Finally, we also applied this formalism to the study of annihilation processes: the transition amplitudes are then also written in a covariant way and the properties of heavy quark symmetries fulfilled. Numerical predictions of decay constants were made with the same four spectroscopic models. (author) 87 refs., 20 figs., 13 tabs.
Cooper, B Y; Flunker, L D; Johnson, R D; Nutter, T J
2018-08-01
Many veterans of Operation Desert Storm (ODS) struggle with the chronic pain of Gulf War Illness (GWI). Exposure to insecticides and pyridostigmine bromide (PB) have been implicated in the etiology of this multisymptom disease. We examined the influence of 3 (DEET (N,N-diethyl-meta-toluamide), permethrin, chlorpyrifos) or 4 GW agents (DEET, permethrin, chlorpyrifos, pyridostigmine bromide (PB)) on the post-exposure ambulatory and resting behaviors of rats. In three independent studies, rats that were exposed to all 4 agents consistently developed both immediate and delayed ambulatory deficits that persisted at least 16 weeks after exposures had ceased. Rats exposed to a 3 agent protocol (PB excluded) did not develop any ambulatory deficits. Cellular and molecular studies on nociceptors harvested from 16WP (weeks post-exposure) rats indicated that vascular nociceptor Na v 1.9 mediated currents were chronically potentiated following the 4 agent protocol but not following the 3 agent protocol. Muscarinic linkages to muscle nociceptor TRPA1 were also potentiated in the 4 agent but not the 3 agent, PB excluded, protocol. Although K v 7 activity changes diverged from the behavioral data, a K v 7 opener, retigabine, transiently reversed ambulation deficits. We concluded that PB played a critical role in the development of pain-like signs in a GWI rat model and that shifts in Na v 1.9 and TRPA1 activity were critical to the expression of these pain behaviors. Copyright © 2018 Elsevier Inc. All rights reserved.
Blanquier, E.
2013-01-01
To study high-energy nuclear physics and the associated phenomena, such as the quark-gluon plasma / hadronic matter phase transition, the Nambu and Jona-Lasinio (NJL) model appears as an interesting alternative to Quantum Chromodynamics, which is not solvable at the considered energies. Indeed, the NJL model allows the description of quark physics at finite temperatures and densities. Furthermore, in order to correct a limitation of the NJL model, i.e. the absence of confinement, a coupling of the quarks/antiquarks to a Polyakov loop was proposed, forming the PNJL model. The objective of this thesis is to explore the possibilities offered by the NJL and PNJL models to describe relevant sub-nuclear particles (quarks, mesons, diquarks and baryons), to study their interactions, and to proceed to a dynamical study involving these particles. After a recall of the useful tools, we modeled the u, d, s effective quarks and the mesons. Then, we described the baryons as quark-diquark bound states. Part of the work concerned the calculation of the cross-sections associated with the possible reactions implying these particles. Then, we incorporated these results in a computer code, in order to study the cooling of a quark/antiquark plasma and its hadronization. In this study, each particle evolves in a system in which the temperature and the densities are local parameters. We have two types of interactions: one due to the collisions, and the other a remote interaction, notably between quarks. Finally, we studied the properties of our approach: qualities, limitations, and possible evolutions. (author)
Contributions to Large Covariance and Inverse Covariance Matrices Estimation
Kang, Xiaoning
2016-01-01
Estimation of covariance matrix and its inverse is of great importance in multivariate statistics with broad applications such as dimension reduction, portfolio optimization, linear discriminant analysis and gene expression analysis. However, accurate estimation of covariance or inverse covariance matrices is challenging due to the positive definiteness constraint and large number of parameters, especially in the high-dimensional cases. In this thesis, I develop several approaches for estimat...
Deriving covariant holographic entanglement
Dong, Xi [School of Natural Sciences, Institute for Advanced Study, Princeton, NJ 08540 (United States)]; Lewkowycz, Aitor [Jadwin Hall, Princeton University, Princeton, NJ 08544 (United States)]; Rangamani, Mukund [Center for Quantum Mathematics and Physics (QMAP), Department of Physics, University of California, Davis, CA 95616 (United States)]
2016-11-07
We provide a gravitational argument in favour of the covariant holographic entanglement entropy proposal. In general time-dependent states, the proposal asserts that the entanglement entropy of a region in the boundary field theory is given by a quarter of the area of a bulk extremal surface in Planck units. The main element of our discussion is an implementation of an appropriate Schwinger-Keldysh contour to obtain the reduced density matrix (and its powers) of a given region, as is relevant for the replica construction. We map this contour into the bulk gravitational theory, and argue that the saddle point solutions of these replica geometries lead to a consistent prescription for computing the field theory Rényi entropies. In the limiting case where the replica index is taken to unity, a local analysis suffices to show that these saddles lead to the extremal surfaces of interest. We also comment on various properties of holographic entanglement that follow from this construction.
Networks of myelin covariance.
Melie-Garcia, Lester; Slater, David; Ruef, Anne; Sanabria-Diaz, Gretel; Preisig, Martin; Kherif, Ferath; Draganski, Bogdan; Lutti, Antoine
2018-04-01
Networks of anatomical covariance have been widely used to study connectivity patterns in both normal and pathological brains based on the concurrent changes of morphometric measures (i.e., cortical thickness) between brain structures across subjects (Evans, ). However, the existence of networks of microstructural changes within brain tissue has been largely unexplored so far. In this article, we studied in vivo the concurrent myelination processes among brain anatomical structures that gathered together emerge to form nonrandom networks. We name these "networks of myelin covariance" (Myelin-Nets). The Myelin-Nets were built from quantitative Magnetization Transfer data, an in-vivo magnetic resonance imaging (MRI) marker of myelin content. The synchronicity of the variations in myelin content between anatomical regions was measured by computing the Pearson's correlation coefficient. We were especially interested in elucidating the effect of age on the topological organization of the Myelin-Nets. We therefore selected two age groups: Young-Age (20-31 years old) and Old-Age (60-71 years old) and a pool of participants from 48 to 87 years old for a Myelin-Nets aging trajectory study. We found that the topological organization of the Myelin-Nets is strongly shaped by aging processes. The global myelin correlation strength, between homologous regions and locally in different brain lobes, showed a significant dependence on age. Interestingly, we also showed that the aging process modulates the resilience of the Myelin-Nets to damage of principal network structures. In summary, this work sheds light on the organizational principles driving myelination and myelin degeneration in brain gray matter and how such patterns are modulated by aging. © 2017 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.
Brier, Matthew R; Mitra, Anish; McCarthy, John E; Ances, Beau M; Snyder, Abraham Z
2015-11-01
Functional connectivity refers to shared signals among brain regions and is typically assessed in a task-free state. Functional connectivity commonly is quantified between signal pairs using Pearson correlation. However, resting-state fMRI is a multivariate process exhibiting a complicated covariance structure. Partial covariance assesses the unique variance shared between two brain regions excluding any widely shared variance, hence is appropriate for the analysis of multivariate fMRI datasets. However, calculation of partial covariance requires inversion of the covariance matrix, which, in most functional connectivity studies, is not invertible owing to rank deficiency. Here we apply Ledoit-Wolf shrinkage (L2 regularization) to invert the high dimensional BOLD covariance matrix. We investigate the network organization and brain-state dependence of partial covariance-based functional connectivity. Although resting-state networks (RSNs) are conventionally defined in terms of shared variance, removal of widely shared variance, surprisingly, improved the separation of RSNs in a spring-embedded graphical model. This result suggests that pair-wise unique shared variance plays a heretofore unrecognized role in RSN covariance organization. In addition, application of partial correlation to fMRI data acquired in the eyes-open vs. eyes-closed states revealed focal changes in uniquely shared variance between the thalamus and visual cortices. This result suggests that partial correlation of resting-state BOLD time series reflects functional processes in addition to structural connectivity. Copyright © 2015 Elsevier Inc. All rights reserved.
COVARIANCE ASSISTED SCREENING AND ESTIMATION.
Ke, By Tracy; Jin, Jiashun; Fan, Jianqing
2014-11-01
Consider a linear model Y = Xβ + z, where X = X_{n,p} and z ~ N(0, I_n). The vector β is unknown and it is of interest to separate its nonzero coordinates from the zero ones (i.e., variable selection). Motivated by examples in long-memory time series (Fan and Yao, 2003) and the change-point problem (Bhattacharya, 1994), we are primarily interested in the case where the Gram matrix G = X'X is non-sparse but sparsifiable by a finite-order linear filter. We focus on the regime where signals are both rare and weak, so that successful variable selection is very challenging but still possible. We approach this problem by a new procedure called Covariance Assisted Screening and Estimation (CASE). CASE first uses linear filtering to reduce the original setting to a new regression model where the corresponding Gram (covariance) matrix is sparse. The new covariance matrix induces a sparse graph, which guides us to conduct multivariate screening without visiting all the submodels. By interacting with the signal sparsity, the graph enables us to decompose the original problem into many separate small-size subproblems (if only we knew where they are!). Linear filtering also induces a so-called problem of information leakage, which can be overcome by the newly introduced patching technique. Together, these give rise to CASE, which is a two-stage Screen and Clean (Fan and Song, 2010; Wasserman and Roeder, 2009) procedure, where we first identify candidates of these submodels by patching and screening, and then re-examine each candidate to remove false positives. For any variable selection procedure β̂, we measure the performance by the minimax Hamming distance between the sign vectors of β̂ and β. We show that in a broad class of situations where the Gram matrix is non-sparse but sparsifiable, CASE achieves the optimal rate of convergence. The results are successfully applied to long-memory time series and the change-point model.
Development of covariance data for fast reactor cores. 3
Shibata, Keiichi; Hasegawa, Akira
1999-03-01
Covariances have been estimated for nuclear data contained in JENDL-3.2. As for Cr and Ni, the physical quantities for which covariances are deduced are cross sections and the first-order Legendre-polynomial coefficient for the angular distribution of elastically scattered neutrons. The covariances were estimated by using the same methodology that had been used in the JENDL-3.2 evaluation in order to keep a consistency between mean values and their covariances. In cases where evaluated data were based on experimental data, the covariances were estimated from the same experimental data. For cross sections that had been evaluated by nuclear model calculations, the same model was applied to generate the covariances. The covariances obtained were compiled into ENDF-6 format files. The covariances, which had been prepared in the previous fiscal year, were re-examined, and some improvements were made. Parts of the Fe and {sup 235}U covariances were updated. Covariances of nu-p and nu-d for {sup 241}Pu and of fission neutron spectra for {sup 233,235,238}U and {sup 239,240}Pu were newly added to the data files. (author)
General Galilei Covariant Gaussian Maps
Gasbarri, Giulio; Toroš, Marko; Bassi, Angelo
2017-09-01
We characterize general non-Markovian Gaussian maps which are covariant under Galilean transformations. In particular, we consider translational and Galilean covariant maps and show that they reduce to the known Holevo result in the Markovian limit. We apply the results to discuss measures of macroscopicity based on classicalization maps, specifically addressing dissipation, Galilean covariance and non-Markovianity. We further suggest a possible generalization of the macroscopicity measure defined by Nimmrichter and Hornberger [Phys. Rev. Lett. 110, 16 (2013)].
Fast Computing for Distance Covariance
Huo, Xiaoming; Szekely, Gabor J.
2014-01-01
Distance covariance and distance correlation have been widely adopted in measuring dependence of a pair of random variables or random vectors. If the computation of distance covariance and distance correlation is implemented directly according to its definition, then its computational complexity is O($n^2$), which is a disadvantage compared to other, faster methods. In this paper we show that the computation of distance covariance and distance correlation of real-valued random variables can be...
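The direct O($n^2$) definition-based computation that the paper improves upon looks like this (simulated data; the paper's faster algorithm is not reproduced here):

```python
import numpy as np

def distance_covariance(x, y):
    """Sample distance covariance of two real-valued samples, O(n^2) direct form."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])   # pairwise distance matrices
    b = np.abs(y[:, None] - y[None, :])
    # Double-center each distance matrix
    A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()
    B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
    # The V-statistic mean(A * B) is nonnegative, so the sqrt is well defined
    return np.sqrt(np.mean(A * B))

rng = np.random.default_rng(2)
x = rng.standard_normal(500)
d_dep = distance_covariance(x, 2 * x)                       # dependent pair
d_ind = distance_covariance(x, rng.standard_normal(500))    # independent pair
print(d_dep > d_ind)
```

Both the n×n distance matrices and the element-wise product make the quadratic cost explicit, which motivates the faster method the paper develops.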
Introduction to covariant formulation of superstring (field) theory
Anon.
1987-01-01
The author discusses the covariant formulation of superstring theories based on BRS invariance. A new formulation of the superstring was constructed by Green and Schwarz in the light-cone gauge first, and then a covariant action was discovered. The covariant action has an interesting geometrical interpretation; however, covariant quantizations are difficult to perform because of the existence of local supersymmetries. Introducing extra variables into the action, a modified action has been proposed. However, it would be difficult to prescribe constraints to define a physical subspace, or to reproduce the correct physical spectrum. Hence the old formulation, i.e., the Neveu-Schwarz-Ramond (NSR) model, is used for covariant quantization. The author begins by quantizing the NSR model in a covariant way using BRS charges. Then the author discusses the field theory of (free) superstrings.
Non-Critical Covariant Superstrings
Grassi, P A
2005-01-01
We construct a covariant description of non-critical superstrings in even dimensions. We explicitly construct supersymmetric hybrid-type variables in a linear dilaton background, and study an underlying N=2 twisted superconformal algebra structure. We find similarities between non-critical superstrings in 2n+2 dimensions and critical superstrings compactified on CY_(4-n) manifolds. We study the spectrum of the non-critical strings, and in particular the Ramond-Ramond massless fields. We use the supersymmetric variables to construct the non-critical superstring sigma-model action in curved target space backgrounds with coupling to the Ramond-Ramond fields. We consider as an example non-critical type IIA strings on an AdS_2 background with Ramond-Ramond 2-form flux.
Optimal covariate designs theory and applications
Das, Premadhis; Mandal, Nripes Kumar; Sinha, Bikas Kumar
2015-01-01
This book primarily addresses the optimality aspects of covariate designs. A covariate model is a combination of ANOVA and regression models. Optimal estimation of the parameters of the model using a suitable choice of designs is of great importance, as such choices allow experimenters to extract maximum information for the unknown model parameters. The main emphasis of this monograph is to start with an assumed covariate model in combination with some standard ANOVA set-ups such as CRD, RBD, BIBD, GDD, BTIBD, BPEBD, cross-over, multi-factor, split-plot and strip-plot designs, treatment control designs, etc., and discuss the nature and availability of optimal covariate designs. In some situations, optimal estimations of both ANOVA and the regression parameters are provided. Global optimality and D-optimality criteria are mainly used in selecting the design. The standard optimality results of both discrete and continuous set-ups have been adapted, and several novel combinatorial techniques have been applied for...
Multilevel maximum likelihood estimation with application to covariance matrices
Turčičová, Marie; Mandel, J.; Eben, Kryštof
Published online: 23 January 2018. ISSN 0361-0926. R&D Projects: GA ČR GA13-34856S. Institutional support: RVO:67985807. Keywords: Fisher information; high dimension; hierarchical maximum likelihood; nested parameter spaces; spectral diagonal covariance model; sparse inverse covariance model. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 0.311, year: 2016.
Convex Banding of the Covariance Matrix.
Bien, Jacob; Bunea, Florentina; Xiao, Luo
2016-01-01
We introduce a new sparse estimator of the covariance matrix for high-dimensional models in which the variables have a known ordering. Our estimator, which is the solution to a convex optimization problem, is equivalently expressed as an estimator which tapers the sample covariance matrix by a Toeplitz, sparsely-banded, data-adaptive matrix. As a result of this adaptivity, the convex banding estimator enjoys theoretical optimality properties not attained by previous banding or tapered estimators. In particular, our convex banding estimator is minimax rate adaptive in Frobenius and operator norms, up to log factors, over commonly-studied classes of covariance matrices, and over more general classes. Furthermore, it correctly recovers the bandwidth when the true covariance is exactly banded. Our convex formulation admits a simple and efficient algorithm. Empirical studies demonstrate its practical effectiveness and illustrate that our exactly-banded estimator works well even when the true covariance matrix is only close to a banded matrix, confirming our theoretical results. Our method compares favorably with all existing methods, in terms of accuracy and speed. We illustrate the practical merits of the convex banding estimator by showing that it can be used to improve the performance of discriminant analysis for classifying sound recordings.
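For contrast with the convex estimator described above, the plain hard-banding baseline it improves upon can be sketched as follows (simulated data; the bandwidth is assumed known, and the paper's convex tapering program is not reproduced here):

```python
import numpy as np

def banded_covariance(X, bandwidth):
    """Hard banding of the sample covariance: zero out entries more than
    `bandwidth` off the diagonal. This is the classical baseline; the
    paper's estimator instead tapers via a convex program."""
    S = np.cov(X, rowvar=False)
    p = S.shape[0]
    i, j = np.indices((p, p))
    return np.where(np.abs(i - j) <= bandwidth, S, 0.0)

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 10))   # 200 samples, 10 ordered variables
Sb = banded_covariance(X, bandwidth=2)
print(np.count_nonzero(Sb[0]))       # only entries within 2 of the diagonal survive
```

Unlike this hard truncation, the convex banding estimator adapts the taper to the data, which is what yields the minimax rate adaptivity and correct bandwidth recovery claimed above.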
The covariant entropy bound in gravitational collapse
Gao, Sijie; Lemos, Jose P. S.
2004-01-01
We study the covariant entropy bound in the context of gravitational collapse. First, we discuss critically the heuristic arguments advanced by Bousso. Then we solve the problem through an exact model: a Tolman-Bondi dust shell collapsing into a Schwarzschild black hole. After the collapse, a new black hole with a larger mass is formed. The horizon, L, of the old black hole then terminates at the singularity. We show that the entropy crossing L does not exceed a quarter of the area of the old horizon. Therefore, the covariant entropy bound is satisfied in this process. (author)
Conformally covariant composite operators in quantum chromodynamics
Craigie, N.S.; Dobrev, V.K.; Todorov, I.T.
1983-03-01
Conformal covariance is shown to determine renormalization properties of composite operators in QCD and in the C{sub 6}{sup 3}-model at the one-loop level. Its relevance to higher-order (renormalization group improved) perturbative calculations in the short-distance limit is also discussed. Light-cone operator product expansions and spectral representations for wave functions in QCD are derived. (author)
Shestakova, Tatiana A; Aguilera, Mònica; Ferrio, Juan Pedro; Gutiérrez, Emilia; Voltas, Jordi
2014-08-01
Identifying how physiological responses are structured across environmental gradients is critical to understanding in what manner ecological factors determine tree performance. Here, we investigated the spatiotemporal patterns of signal strength of carbon isotope discrimination (Δ¹³C) and oxygen isotope composition (δ¹⁸O) for three deciduous oaks (Quercus faginea (Lam.), Q. humilis Mill. and Q. petraea (Matt.) Liebl.) and one evergreen oak (Q. ilex L.) co-occurring in Mediterranean forests along an aridity gradient. We hypothesized that contrasting strategies in response to drought would lead to differential climate sensitivities between functional groups. Such differential sensitivities could result in a contrasting imprint on stable isotopes, depending on whether the spatial or temporal organization of tree-ring signals was analysed. To test these hypotheses, we proposed a mixed modelling framework to group isotopic records into potentially homogeneous subsets according to taxonomic or geographical criteria. To this end, carbon and oxygen isotopes were modelled through different variance-covariance structures for the variability among years (at the temporal level) or sites (at the spatial level). Signal-strength parameters were estimated from the outcome of selected models. We found striking differences between deciduous and evergreen oaks in the organization of their temporal and spatial signals. Therefore, the relationships with climate were examined independently for each functional group. While Q. ilex exhibited a large spatial dependence of isotopic signals on the temperature regime, deciduous oaks showed a greater dependence on precipitation, confirming their higher susceptibility to drought. Such contrasting responses to drought among oak types were also observed at the temporal level (interannual variability), with stronger associations with growing-season water availability in deciduous oaks. Thus, our results indicate that Mediterranean deciduous
Pattey, Elizabeth; Jégo, Guillaume; Bourgeois, Gaétan
2010-05-01
Verifying the performance of process-based crop growth models to predict evapotranspiration and crop biomass is a key component of the adaptation of agricultural crop production to climate variations. STICS, developed by INRA, was among the models selected by Agriculture and Agri-Food Canada to be implemented for environmental assessment studies on climate variations, because of its built-in ability to assimilate biophysical descriptors such as LAI derived from satellite imagery and its open architecture. The model's prediction of shoot biomass was calibrated using destructive biomass measurements over one season, by adjusting six cultivar parameters and three generic plant parameters to define two grain corn cultivars adapted to the 1000-km-long Mixedwood Plains ecozone. Its performance was then evaluated using a database of 40 site-years of corn destructive biomass and yield. In this study we evaluate the temporal response of STICS evapotranspiration and biomass accumulation predictions against estimates using daily aggregated eddy covariance fluxes. The flux tower was located at an experimental farm south of Ottawa, and measurements were carried out over corn fields in 1995, 1996, 1998, 2000, 2002 and 2006. Daytime and nighttime fluxes were quality-controlled and gap-filled separately. Soil respiration was partitioned to calculate the corn net daily CO2 uptake, which was converted into dry biomass. Of the six growing seasons, three (1995, 1998, 2002) had water stress periods during corn grain filling. Year 2000 was cool and wet, while 1996 had heat and rainfall distributed evenly over the season and 2006 had a wet spring. STICS can predict evapotranspiration using either crop coefficients, when wind speed and air moisture are not available, or resistances. The first approach yielded higher predictions for all the years than the resistance approach and the flux measurements. The dynamics of the evapotranspiration prediction of STICS were very good for the growing seasons without
Covariant single-hole optical potential
Kam, J. de
1982-01-01
In this investigation a covariant optical potential model is constructed for scattering processes of mesons from nuclei in which the meson interacts repeatedly with one of the target nucleons. The nuclear binding interactions in the intermediate scattering state are consistently taken into account. In particular for pions and K⁻ projectiles this is important in view of the strong energy dependence of the elementary projectile-nucleon amplitude. Furthermore, this optical potential satisfies unitarity and relativistic covariance. The starting point in our discussion is the three-body model for the optical potential. To obtain a practical covariant theory I formulate the three-body model as a relativistic quasi two-body problem. Expressions for the transition interactions and propagators in the quasi two-body equations are found by imposing the correct s-channel unitarity relations and by using dispersion integrals. This is done in such a way that the correct non-relativistic limit is obtained, avoiding clustering problems. Corrections to the quasi two-body treatment from the Pauli principle and the required ground-state exclusion are taken into account. The covariant equations that we arrive at are amenable to practical calculations. (orig.)
Smooth individual level covariates adjustment in disease mapping.
Huque, Md Hamidul; Anderson, Craig; Walton, Richard; Woolford, Samuel; Ryan, Louise
2018-05-01
Spatial models for disease mapping should ideally account for covariates measured both at individual and area levels. The newly available "indiCAR" model fits the popular conditional autoregressive (CAR) model by accommodating both individual and group level covariates while adjusting for spatial correlation in the disease rates. This algorithm has been shown to be effective but assumes log-linear associations between individual level covariates and outcome. In many studies, the relationship between individual level covariates and the outcome may be non-log-linear, and methods to capture such nonlinearity between individual level covariates and the outcome in spatial regression modeling are not well developed. In this paper, we propose a new algorithm, smooth-indiCAR, to fit an extension to the popular conditional autoregressive model that can accommodate both linear and nonlinear individual level covariate effects while adjusting for group level covariates and spatial correlation in the disease rates. In this formulation, the effect of a continuous individual level covariate is accommodated via penalized splines. We describe a two-step estimation procedure to obtain reliable estimates of individual and group level covariate effects where both individual and group level covariate effects are estimated separately. This distributed computing framework enhances its application in the Big Data domain with a large number of individual/group level covariates. We evaluate the performance of smooth-indiCAR through simulation. Our results indicate that the smooth-indiCAR method provides reliable estimates of all regression and random effect parameters. We illustrate our proposed methodology with an analysis of data on neutropenia admissions in New South Wales (NSW), Australia. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Slater, David; Ruef, Anne; Sanabria‐Diaz, Gretel; Preisig, Martin; Kherif, Ferath; Draganski, Bogdan; Lutti, Antoine
2017-01-01
Networks of anatomical covariance have been widely used to study connectivity patterns in both normal and pathological brains based on the concurrent changes of morphometric measures (i.e., cortical thickness) between brain structures across subjects (Evans, 2013). However, the existence of networks of microstructural changes within brain tissue has been largely unexplored so far. In this article, we studied in vivo the concurrent myelination processes among brain anatomical structures, which together emerge to form nonrandom networks. We name these "networks of myelin covariance" (Myelin-Nets). The Myelin-Nets were built from quantitative Magnetization Transfer data, an in-vivo magnetic resonance imaging (MRI) marker of myelin content. The synchronicity of the variations in myelin content between anatomical regions was measured by computing the Pearson's correlation coefficient. We were especially interested in elucidating the effect of age on the topological organization of the Myelin-Nets. We therefore selected two age groups: Young-Age (20–31 years old) and Old-Age (60–71 years old) and a pool of participants from 48 to 87 years old for a Myelin-Nets aging trajectory study. We found that the topological organization of the Myelin-Nets is strongly shaped by aging processes. The global myelin correlation strength, between homologous regions and locally in different brain lobes, showed a significant dependence on age. Interestingly, we also showed that the aging process modulates the resilience of the Myelin-Nets to damage of principal network structures. In summary, this work sheds light on the organizational principles driving myelination and myelin degeneration in brain gray matter and how such patterns are modulated by aging. PMID:29271053
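The network construction described in this abstract, correlating a per-region measure across subjects to obtain an inter-regional covariance network, can be sketched as follows. This is a minimal illustration only: the subject count, region count, simulated values, and edge threshold are all invented, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_regions = 50, 6

# Simulated myelin-content values: one row per subject, one column per region.
mt = rng.normal(loc=1.0, scale=0.1, size=(n_subjects, n_regions))
# Make two regions co-vary so the resulting network has some structure.
mt[:, 1] = mt[:, 0] + rng.normal(scale=0.02, size=n_subjects)

# Pearson correlation between every pair of regions, computed across subjects.
corr = np.corrcoef(mt, rowvar=False)          # shape (n_regions, n_regions)

# Adjacency matrix of the covariance network: keep strong correlations only.
adjacency = (np.abs(corr) > 0.5) & ~np.eye(n_regions, dtype=bool)
```

Graph-theoretic measures (correlation strength, resilience to node damage) would then be computed on `adjacency` or on the weighted matrix `corr`.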
Liang, Wei; Lü, Yihe; Zhang, Weibin; Li, Shuai; Jin, Zhao; Ciais, Philippe; Fu, Bojie; Wang, Shuai; Yan, Jianwu; Li, Junyi; Su, Huimin
2017-07-01
Grassland ecosystems play a crucial role in the global carbon cycle and provide vital ecosystem services for many species. However, these low-productivity and water-limited ecosystems are sensitive and vulnerable to climate perturbations and human intervention, the latter of which is often not considered due to a lack of spatial information on grassland management. Here, by applying a model tree ensemble (MTE-GRASS) trained on local eddy covariance data, with gridded climate and management intensity fields (grazing and cutting) as predictors, we provide a first estimate of global grassland gross primary production (GPP). GPP from our study compares well (modeling efficiency NSE = 0.85 spatial; NSE between 0.69 and 0.94 interannual) with that from flux measurements. Global grassland GPP was on average 11 ± 0.31 Pg C yr -1 and exhibited a significantly increasing trend at both annual and seasonal scales, with an annual increase of 0.023 Pg C (0.2%) from 1982 to 2011. Meanwhile, we found that at both annual and seasonal scales, the trend (except for northern summer) and interannual variability of GPP are primarily driven by arid/semiarid ecosystems, the latter owing to the larger variation in precipitation. Grasslands in arid/semiarid regions have a stronger (33 g C m -2 yr -1 /100 mm) and faster (0- to 1-month time lag) response to precipitation than those in other regions. Globally, spatial gradients (71%) and interannual changes (51%) in GPP were mainly driven by precipitation, mostly in arid/semiarid climate zones, while temperature and radiation together accounted for half of the GPP variability, mainly in high-latitude or cold regions. Our findings and the results of other studies suggest the overwhelming importance of arid/semiarid regions as a control on the grassland ecosystem carbon cycle. Similarly, under the projected future climate change, grassland ecosystems in these regions will
A scale invariant covariance structure on jet space
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2005-01-01
This paper considers scale invariance of statistical image models. We study statistical scale invariance of the covariance structure of jet space under scale space blurring and derive the necessary structure and conditions of the jet covariance matrix in order for it to be scale invariant. As par...
A three domain covariance framework for EEG/MEG data
Ros, B.P.; Bijma, F.; de Gunst, M.C.M.; de Munck, J.C.
2015-01-01
In this paper we introduce a covariance framework for the analysis of single subject EEG and MEG data that takes into account observed temporal stationarity on small time scales and trial-to-trial variations. We formulate a model for the covariance matrix, which is a Kronecker product of three
Covariance Manipulation for Conjunction Assessment
Hejduk, M. D.
2016-01-01
The manipulation of space object covariances to try to provide additional or improved information to conjunction risk assessment is not an uncommon practice. Types of manipulation include fabricating a covariance when it is missing or unreliable to force the probability of collision (Pc) to a maximum value ('PcMax'), scaling a covariance to try to improve its realism or see the effect of covariance volatility on the calculated Pc, and constructing the equivalent of an epoch covariance at a convenient future point in the event ('covariance forecasting'). In bringing these methods to bear for Conjunction Assessment (CA) operations, however, some do not remain fully consistent with best practices for conducting risk management, some seem to be of relatively low utility, and some require additional information before they can contribute fully to risk analysis. This study describes some basic principles of modern risk management (following the Kaplan construct) and then examines the PcMax and covariance forecasting paradigms for alignment with these principles; it then further examines the expected utility of these methods in the modern CA framework. Both paradigms are found to be not without utility, but only in situations that are somewhat carefully circumscribed.
Covariance matrices of experimental data
Perey, F.G.
1978-01-01
A complete statement of the uncertainties in a data set is given by its covariance matrix. It is shown how the covariance matrix of data can be generated using the information available to obtain their standard deviations. Determination of resonance energies by the time-of-flight method is used as an example. The procedure for combining data when the covariance matrix is non-diagonal is given. The method is illustrated by means of examples taken from the recent literature to obtain an estimate of the energy of the first resonance in carbon and for five resonances of ²³⁸U
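The combination procedure mentioned in this abstract, averaging measurements whose covariance matrix is non-diagonal, reduces in the simplest case to a generalized least-squares average. A minimal numeric sketch, where the two measurements and their covariance matrix are invented purely for illustration:

```python
import numpy as np

# Two measurements of the same quantity with correlated uncertainties.
y = np.array([10.2, 9.8])
V = np.array([[0.04, 0.01],     # covariance matrix: variances on the diagonal,
              [0.01, 0.09]])    # a common systematic error off the diagonal

# Generalized least-squares average: weights are the row sums of V^{-1},
# normalized so they add to one; the combined variance is 1 / sum(V^{-1}).
Vinv = np.linalg.inv(V)
w = Vinv.sum(axis=1) / Vinv.sum()
estimate = w @ y
variance = 1.0 / Vinv.sum()
```

With a diagonal `V` this reduces to the familiar inverse-variance weighted mean; the off-diagonal term shifts weight toward the more precise measurement.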
Evaluation and processing of covariance data
Wagner, M.
1993-01-01
These proceedings of a specialists' meeting on evaluation and processing of covariance data are divided into four parts: Part 1, needs for evaluated covariance data (2 papers); Part 2, generation of covariance data (15 papers); Part 3, processing of covariance files (2 papers); Part 4, experience in the use of evaluated covariance data (2 papers)
A New Approach for Nuclear Data Covariance and Sensitivity Generation
Leal, L.C.; Larson, N.M.; Derrien, H.; Kawano, T.; Chadwick, M.B.
2005-01-01
Covariance data are required to correctly assess uncertainties in design parameters in nuclear applications. The error estimation of calculated quantities relies on the nuclear data uncertainty information available in the basic nuclear data libraries, such as the U.S. Evaluated Nuclear Data File, ENDF/B. The uncertainty files in the ENDF/B library are obtained from the analysis of experimental data and are stored as variance and covariance data. The computer code SAMMY is used in the analysis of the experimental data in the resolved and unresolved resonance energy regions. The data fitting of cross sections is based on generalized least-squares formalism (Bayes' theory) together with the resonance formalism described by R-matrix theory. Two approaches are used in SAMMY for the generation of resonance-parameter covariance data. In the evaluation process SAMMY generates a set of resonance parameters that fit the data, and, in addition, it also provides the resonance-parameter covariances. For existing resonance-parameter evaluations where no resonance-parameter covariance data are available, the alternative is to use an approach called the 'retroactive' resonance-parameter covariance generation. In the high-energy region the methodology for generating covariance data consists of least-squares fitting and model parameter adjustment. The least-squares fitting method calculates covariances directly from experimental data. The parameter adjustment method employs a nuclear model calculation such as the optical model and the Hauser-Feshbach model, and estimates a covariance for the nuclear model parameters. In this paper we describe the application of the retroactive method and the parameter adjustment method to generate covariance data for the gadolinium isotopes
On Galilean covariant quantum mechanics
Horzela, A.; Kapuscik, E.; Kempczynski, J.; Joint Inst. for Nuclear Research, Dubna
1991-08-01
Formalism exhibiting the Galilean covariance of wave mechanics is proposed. A new notion of quantum mechanical forces is introduced. The formalism is illustrated on the example of the harmonic oscillator. (author)
Covariate-adjusted measures of discrimination for survival data
White, Ian R; Rapsomaniki, Eleni; Frikke-Schmidt, Ruth
2015-01-01
by the study design (e.g. age and sex) influence discrimination and can make it difficult to compare model discrimination between studies. Although covariate adjustment is a standard procedure for quantifying disease-risk factor associations, there are no covariate adjustment methods for discrimination… statistics in censored survival data. OBJECTIVE: To develop extensions of the C-index and D-index that describe the prognostic ability of a model adjusted for one or more covariate(s). METHOD: We define a covariate-adjusted C-index and D-index for censored survival data, propose several estimators…, and investigate their performance in simulation studies and in data from a large individual participant data meta-analysis, the Emerging Risk Factors Collaboration. RESULTS: The proposed methods perform well in simulations. In the Emerging Risk Factors Collaboration data, the age-adjusted C-index and D-index were…
Cross-covariance functions for multivariate random fields based on latent dimensions
Apanasovich, T. V.; Genton, M. G.
2010-01-01
The problem of constructing valid parametric cross-covariance functions is challenging. We propose a simple methodology, based on latent dimensions and existing covariance models for univariate random fields, to develop flexible, interpretable
Development of covariance capabilities in EMPIRE code
Herman,M.; Pigni, M.T.; Oblozinsky, P.; Mughabghab, S.F.; Mattoon, C.M.; Capote, R.; Cho, Young-Sik; Trkov, A.
2008-06-24
The nuclear reaction code EMPIRE has been extended to provide evaluation capabilities for neutron cross section covariances in the thermal, resolved resonance, unresolved resonance and fast neutron regions. The Atlas of Neutron Resonances by Mughabghab is used as a primary source of information on uncertainties at low energies. Care is taken to ensure consistency among the resonance parameter uncertainties and those for thermal cross sections. The resulting resonance parameter covariances are formatted in the ENDF-6 File 32. In the fast neutron range our methodology is based on model calculations with the code EMPIRE combined with experimental data through several available approaches. The model-based covariances can be obtained using deterministic (Kalman) or stochastic (Monte Carlo) propagation of model parameter uncertainties. We show that these two procedures yield comparable results. The Kalman filter and/or the generalized least square fitting procedures are employed to incorporate experimental information. We compare the two approaches analyzing results for the major reaction channels on ⁸⁹Y. We also discuss a long-standing issue of unreasonably low uncertainties and link it to the rigidity of the model.
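The claim in this abstract that deterministic and stochastic propagation of model-parameter uncertainties yield comparable results can be illustrated on a toy linear model. The sensitivity matrix and parameter covariance below are invented for the sketch and have nothing to do with EMPIRE's actual nuclear models; for a linear model the two routes agree up to Monte Carlo sampling error.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "model": three cross sections depending linearly on two parameters.
S = np.array([[1.0, 0.5],
              [0.3, 2.0],
              [1.5, 0.2]])          # sensitivity matrix d(sigma)/d(p)
Mp = np.array([[0.01, 0.002],
               [0.002, 0.04]])      # model-parameter covariance (invented)

# Deterministic propagation ("sandwich rule"): C = S Mp S^T.
C_det = S @ Mp @ S.T

# Stochastic propagation: sample parameters, push them through the model.
p0 = np.array([1.0, 1.0])
samples = rng.multivariate_normal(p0, Mp, size=200_000)
sigmas = samples @ S.T
C_mc = np.cov(sigmas, rowvar=False)
```

For a nonlinear model the two would differ at second order, which is one reason codes offer both routes.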
Covariant holography of a tachyonic accelerating universe
Rozas-Fernandez, Alberto [Consejo Superior de Investigaciones Cientificas, Instituto de Fisica Fundamental, Madrid (Spain); University of Portsmouth, Institute of Cosmology and Gravitation, Portsmouth (United Kingdom)
2014-08-15
We apply the holographic principle to a flat dark energy dominated Friedmann-Robertson-Walker spacetime filled with a tachyon scalar field with constant equation of state w = p/ρ, both for w > -1 and w < -1. By using a geometrical covariant procedure, which allows the construction of holographic hypersurfaces, we have obtained for each case the position of the preferred screen and have then compared these with those obtained by using the holographic dark energy model with the future event horizon as the infrared cutoff. In the phantom scenario, one of the two obtained holographic screens is placed on the big rip hypersurface, both for the covariant holographic formalism and the holographic phantom model. It is also analyzed whether the existence of these preferred screens allows a mathematically consistent formulation of fundamental theories based on the existence of an S-matrix at infinite distances. (orig.)
Hua, Hairui; Burke, Danielle L; Crowther, Michael J; Ensor, Joie; Tudur Smith, Catrin; Riley, Richard D
2017-02-28
Stratified medicine utilizes individual-level covariates that are associated with a differential treatment effect, also known as treatment-covariate interactions. When multiple trials are available, meta-analysis is used to help detect true treatment-covariate interactions by combining their data. Meta-regression of trial-level information is prone to low power and ecological bias, and therefore, individual participant data (IPD) meta-analyses are preferable to examine interactions utilizing individual-level information. However, one-stage IPD models are often wrongly specified, such that interactions are based on amalgamating within- and across-trial information. We compare, through simulations and an applied example, fixed-effect and random-effects models for a one-stage IPD meta-analysis of time-to-event data where the goal is to estimate a treatment-covariate interaction. We show that it is crucial to centre patient-level covariates by their mean value in each trial, in order to separate out within-trial and across-trial information. Otherwise, bias and coverage of interaction estimates may be adversely affected, leading to potentially erroneous conclusions driven by ecological bias. We revisit an IPD meta-analysis of five epilepsy trials and examine age as a treatment effect modifier. The interaction is -0.011 (95% CI: -0.019 to -0.003; p = 0.004), and thus highly significant, when amalgamating within-trial and across-trial information. However, when separating within-trial from across-trial information, the interaction is -0.007 (95% CI: -0.019 to 0.005; p = 0.22), and thus its magnitude and statistical significance are greatly reduced. We recommend that meta-analysts should only use within-trial information to examine individual predictors of treatment effect and that one-stage IPD models should separate within-trial from across-trial information to avoid ecological bias. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd
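The centring step this abstract recommends, subtracting each trial's mean covariate so that within-trial and across-trial information separate, is a one-liner in practice. A hypothetical sketch: the data frame and column names below are invented, not taken from the epilepsy IPD.

```python
import pandas as pd

# Hypothetical IPD: one row per patient, with a trial identifier and age.
ipd = pd.DataFrame({
    "trial": [1, 1, 1, 2, 2, 2],
    "age":   [40, 50, 60, 20, 30, 40],
})

# Centre age by its mean within each trial; the trial means themselves carry
# the across-trial information and can enter the model as a separate term.
ipd["age_centred"] = ipd["age"] - ipd.groupby("trial")["age"].transform("mean")
ipd["age_trial_mean"] = ipd.groupby("trial")["age"].transform("mean")
```

An interaction between treatment and `age_centred` then uses only within-trial information, while `age_trial_mean` absorbs the across-trial (ecological) component.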
Torsion and geometrostasis in covariant superstrings
Zachos, C.
1985-01-01
The covariant action for freely propagating heterotic superstrings consists of a metric and a torsion term with a special relative strength. It is shown that the strength for which torsion flattens the underlying 10-dimensional superspace geometry is precisely that which yields free oscillators on the light cone. This is in complete analogy with the geometrostasis of two-dimensional sigma-models with Wess-Zumino interactions. 13 refs
Multiple feature fusion via covariance matrix for visual tracking
Jin, Zefenfen; Hou, Zhiqiang; Yu, Wangsheng; Wang, Xin; Sun, Hui
2018-04-01
Aiming at the problem of complicated dynamic scenes in visual target tracking, a multi-feature fusion tracking algorithm based on the covariance matrix is proposed to improve the robustness of the tracking algorithm. Within the framework of a quantum genetic algorithm, this paper uses the region covariance descriptor to fuse the color, edge and texture features. It also uses a fast covariance intersection algorithm to update the model. The low dimension of the region covariance descriptor, the fast convergence speed and strong global optimization ability of the quantum genetic algorithm, and the fast computation of the fast covariance intersection algorithm are used to improve the computational efficiency of the fusion, matching, and updating process, so that the algorithm achieves fast and effective multi-feature fusion tracking. The experiments prove that the proposed algorithm can not only achieve fast and robust tracking but also effectively handle interference from occlusion, rotation, deformation, motion blur and so on.
Precomputing Process Noise Covariance for Onboard Sequential Filters
Olson, Corwin G.; Russell, Ryan P.; Carpenter, J. Russell
2017-01-01
Process noise is often used in estimation filters to account for unmodeled and mismodeled accelerations in the dynamics. The process noise covariance acts to inflate the state covariance over propagation intervals, increasing the uncertainty in the state. In scenarios where the acceleration errors change significantly over time, the standard process noise covariance approach can fail to provide effective representation of the state and its uncertainty. Consider covariance analysis techniques provide a method to precompute a process noise covariance profile along a reference trajectory using known model parameter uncertainties. The process noise covariance profile allows significantly improved state estimation and uncertainty representation over the traditional formulation. As a result, estimation performance on par with the consider filter is achieved for trajectories near the reference trajectory without the additional computational cost of the consider filter. The new formulation also has the potential to significantly reduce the trial-and-error tuning currently required of navigation analysts. A linear estimation problem as described in several previous consider covariance analysis studies is used to demonstrate the effectiveness of the precomputed process noise covariance, as well as a nonlinear descent scenario at the asteroid Bennu with optical navigation.
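The inflation mechanism this abstract describes, the process noise covariance added at each propagation interval, appears in the standard time-update of a linear filter. The state model and numeric values below are invented for illustration:

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt],
              [0.0, 1.0]])            # constant-velocity state transition
P = np.diag([1.0, 0.25])              # state covariance before propagation
q = 0.1                               # acceleration noise spectral density
Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                  [dt**2 / 2, dt]])   # discrete white-noise acceleration Q

# Time update: dynamics alone (F P F^T) plus the process-noise inflation Q.
P_prop = F @ P @ F.T
P_new = P_prop + Q
```

A precomputed process-noise profile, as in the paper, would amount to replacing the constant `Q` with a time-indexed sequence `Q[k]` derived from consider-covariance analysis along the reference trajectory.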
Duality ensures modular covariance
Li Miao; Yu Ming
1989-11-01
We show that the modular transformations for one point functions on the torus, S(n), satisfy the polynomial equations derived by Moore and Seiberg, provided the duality property of the model is ensured. The formula for S(n) is derived by us previously and should be valid for any conformal field theory. As a consequence, the full consistency conditions for modular invariance at higher genus are completely guaranteed by duality of the theory on the sphere. (orig.)
GLq(N)-covariant quantum algebras and covariant differential calculus
Isaev, A.P.; Pyatov, P.N.
1992-01-01
GL q (N)-covariant quantum algebras with generators satisfying quadratic polynomial relations are considered. It is shown that, up to some inessential arbitrariness, there are only two kinds of such quantum algebras, namely, the algebras with q-deformed commutation and q-deformed anticommutation relations. 25 refs
GLq(N)-covariant quantum algebras and covariant differential calculus
Isaev, A.P.; Pyatov, P.N.
1993-01-01
We consider GL q (N)-covariant quantum algebras with generators satisfying quadratic polynomial relations. We show that, up to some inessential arbitrariness, there are only two kinds of such quantum algebras, namely, the algebras with q-deformed commutation and q-deformed anticommutation relations. The connection with the bicovariant differential calculus on the linear quantum groups is discussed. (orig.)
Cosmic censorship conjecture revisited: covariantly
Hamid, Aymen I M; Goswami, Rituparno; Maharaj, Sunil D
2014-01-01
In this paper we study the dynamics of the trapped region using a frame independent semi-tetrad covariant formalism for general locally rotationally symmetric (LRS) class II spacetimes. We covariantly prove some important geometrical results for the apparent horizon, and state the necessary and sufficient conditions for a singularity to be locally naked. These conditions bring out, for the first time in a quantitative and transparent manner, the importance of the Weyl curvature in deforming and delaying the trapped region during continual gravitational collapse, making the central singularity locally visible. (paper)
Pang, Yang [Columbia Univ., New York, NY (United States); Brookhaven National Labs., Upton, NY (United States)]
1997-09-22
Many phenomenological models for relativistic heavy ion collisions share a common framework - the relativistic Boltzmann equations. Within this framework, a nucleus-nucleus collision is described by the evolution of phase-space distributions of several species of particles. The equations can be effectively solved with the cascade algorithm by sampling each phase-space distribution with points, i.e. δ-functions, and by treating the interaction terms as collisions of these points. In between collisions, each point travels on a straight line trajectory. In most implementations of the cascade algorithm, each physical particle, e.g. a hadron or a quark, is often represented by one point. Thus, the cross-section for a collision of two points is just the cross-section of the physical particles, which can be quite large compared to the local density of the system. For an ultra-relativistic nucleus-nucleus collision, this could lead to a large violation of the Lorentz invariance. By using the invariance property of the Boltzmann equation under a scale transformation, a Lorentz invariant cascade algorithm can be obtained. The General Cascade Program - GCP - is a tool for solving the relativistic Boltzmann equation with any number of particle species and very general interactions with the cascade algorithm.
Evans, Nathan J; Steyvers, Mark; Brown, Scott D
2018-06-05
Understanding individual differences in cognitive performance is an important part of understanding how variations in underlying cognitive processes can result in variations in task performance. However, the exploration of individual differences in the components of the decision process-such as cognitive processing speed, response caution, and motor execution speed-in previous research has been limited. Here, we assess the heritability of the components of the decision process, with heritability having been a common aspect of individual differences research within other areas of cognition. Importantly, a limitation of previous work on cognitive heritability is the underlying assumption that variability in response times solely reflects variability in the speed of cognitive processing. This assumption has been problematic in other domains, due to the confounding effects of caution and motor execution speed on observed response times. We extend a cognitive model of decision-making to account for relatedness structure in a twin study paradigm. This approach can separately quantify different contributions to the heritability of response time. Using data from the Human Connectome Project, we find strong evidence for the heritability of response caution, and more ambiguous evidence for the heritability of cognitive processing speed and motor execution speed. Our study suggests that the assumption made in previous studies-that the heritability of cognitive ability is based on cognitive processing speed-may be incorrect. More generally, our methodology provides a useful avenue for future research in complex data that aims to analyze cognitive traits across different sources of related data, whether the relation is between people, tasks, experimental phases, or methods of measurement. © 2018 Cognitive Science Society, Inc.
Graphical representation of covariant-contravariant modal formulae
Miguel Palomino
2011-08-01
Covariant-contravariant simulation is a combination of standard (covariant) simulation, its contravariant counterpart and bisimulation. We have previously studied its logical characterization by means of the covariant-contravariant modal logic. Moreover, we have investigated the relationships between this model and that of modal transition systems, where two kinds of transitions (the so-called may and must transitions) were combined in order to obtain a simple framework to express a notion of refinement over state-transition models. In a classic paper, Boudol and Larsen established a precise connection between the graphical approach, by means of modal transition systems, and the logical approach, based on Hennessy-Milner logic without negation, to system specification. They obtained a (graphical) representation theorem proving that a formula can be represented by a term if, and only if, it is consistent and prime. We show in this paper that the formulae from the covariant-contravariant modal logic that admit a "graphical" representation by means of processes, modulo the covariant-contravariant simulation preorder, are also the consistent and prime ones. In order to obtain the desired graphical representation result, we first restrict ourselves to the case of covariant-contravariant systems without bivariant actions. Bivariant actions can be incorporated later by means of an encoding that splits each bivariant action into its covariant and its contravariant parts.
Covariance matrix estimation for stationary time series
Xiao, Han; Wu, Wei Biao
2011-01-01
We obtain a sharp convergence rate for banded covariance matrix estimates of stationary processes. A precise order of magnitude is derived for the spectral radius of sample covariance matrices. We also consider a thresholded covariance matrix estimator that can better characterize sparsity if the true covariance matrix is sparse. As our main tool, we implement Toeplitz's [Math. Ann. 70 (1911) 351–376] idea and relate eigenvalues of covariance matrices to the spectral densities or Fourier transforms...
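The two estimators discussed in this abstract, banding and thresholding a sample covariance matrix, can be sketched directly. The AR(1)-like data, band width, and threshold below are arbitrary choices for illustration, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(2)

# n independent length-p segments of an AR(1)-type stationary series, so the
# true covariance decays geometrically away from the diagonal.
n, p, phi = 2000, 8, 0.5
eps = rng.normal(size=(n, p))
x = np.zeros((n, p))
x[:, 0] = eps[:, 0]
for t in range(1, p):
    x[:, t] = phi * x[:, t - 1] + eps[:, t]

S = np.cov(x, rowvar=False)           # sample covariance, shape (p, p)

# Banded estimator: zero all entries more than k bands off the diagonal.
k = 2
i, j = np.indices(S.shape)
S_banded = np.where(np.abs(i - j) <= k, S, 0.0)

# Thresholded estimator: zero all entries smaller in magnitude than a cutoff.
S_thresh = np.where(np.abs(S) >= 0.1, S, 0.0)
```

Banding exploits the ordering of the variables (natural for time series), while thresholding only assumes sparsity and needs no ordering.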
Gaskins, J T; Daniels, M J
2016-01-02
The estimation of the covariance matrix is a key concern in the analysis of longitudinal data. When data consists of multiple groups, it is often assumed the covariance matrices are either equal across groups or are completely distinct. We seek methodology to allow borrowing of strength across potentially similar groups to improve estimation. To that end, we introduce a covariance partition prior which proposes a partition of the groups at each measurement time. Groups in the same set of the partition share dependence parameters for the distribution of the current measurement given the preceding ones, and the sequence of partitions is modeled as a Markov chain to encourage similar structure at nearby measurement times. This approach additionally encourages a lower-dimensional structure of the covariance matrices by shrinking the parameters of the Cholesky decomposition toward zero. We demonstrate the performance of our model through two simulation studies and the analysis of data from a depression study. This article includes Supplementary Material available online.
Condition Number Regularized Covariance Estimation.
Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala
2013-06-01
Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p, small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
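The core idea of this record, directly controlling the condition number of the covariance estimator, can be illustrated by the simplest such device: clipping the sample eigenvalues to a bounded range. This sketch is a crude stand-in for the paper's maximum-likelihood solution, whose optimal clipping bounds are derived rather than fixed by hand; the dimensions and the bound `kappa_max` are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# "Large p, small n": the sample covariance is rank-deficient, hence singular.
p, n = 20, 15
X = rng.normal(size=(n, p))
S = np.cov(X, rowvar=False)

def clip_condition_number(S, kappa_max):
    """Clip the spectrum of S so the result has condition number <= kappa_max."""
    vals, vecs = np.linalg.eigh(S)
    hi = vals.max()
    lo = max(hi / kappa_max, 1e-12)   # floor so zero eigenvalues are lifted
    clipped = np.clip(vals, lo, hi)
    return vecs @ np.diag(clipped) @ vecs.T

S_reg = clip_condition_number(S, kappa_max=50.0)
```

The regularized matrix is invertible and well-conditioned by construction, which is what downstream uses such as portfolio optimization require.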
Condition Number Regularized Covariance Estimation*
Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala
2012-01-01
Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called “large p, small n” setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197
Covariant Gauss law commutator anomaly
Dunne, G.V.; Trugenberger, C.A.; Massachusetts Inst. of Tech., Cambridge
1990-01-01
Using a (fixed-time) Hamiltonian formalism we derive a covariant form for the anomaly in the commutator algebra of Gauss law generators for chiral fermions interacting with a dynamical non-abelian gauge field in 3+1 dimensions. (orig.)
Covariant gauges for constrained systems
Gogilidze, S.A.; Khvedelidze, A.M.; Pervushin, V.N.
1995-01-01
A method is proposed for constructing an extended phase space for singular theories, permitting the consideration of covariant gauges without the introduction of ghost fields. The extension of the phase space is carried out by identifying the initial theory with an equivalent theory with higher derivatives and applying the Ostrogradsky method of Hamiltonian description to it. 7 refs
Covariant quantization of heterotic strings in supersymmetric chiral boson formulation
Yu, F.
1992-01-01
This dissertation presents the covariant supersymmetric chiral boson formulation of the heterotic strings. The main feature of this formulation is the covariant quantization of the so-called leftons and rightons -- the (1,0) supersymmetric generalizations of the world-sheet chiral bosons -- that constitute basic building blocks of general heterotic-type string models. Although the (Neveu-Schwarz-Ramond or Green-Schwarz) heterotic strings provide the most realistic string models, their covariant quantization, with the widely-used Siegel formalism, has never been rigorously carried out. It is clarified in this dissertation that the covariant Siegel formalism is pathological upon quantization. As a test, a general classical covariant (NSR) heterotic string action that has the Siegel symmetry is constructed in arbitrary curved space-time coupled to (1,0) world-sheet supergravity. In the light-cone gauge quantization, the critical dimensions are derived for such an action with leftons and rightons compactified on group manifolds G_L × G_R. The covariant quantization of this action does not agree with the physical results in the light-cone gauge quantization. This dissertation establishes a new formalism for the covariant quantization of heterotic strings. The desired consistent covariant path-integral quantization of supersymmetric chiral bosons, and thus of the general (NSR) heterotic-type strings with leftons and rightons compactified on the torus (S¹)^{d_L} × (S¹)^{d_R}, is carried out. An infinite set of auxiliary (1,0) scalar superfields is introduced to convert the second-class chiral constraint into first-class ones. The covariant gauge-fixed action has an extended BRST symmetry described by the graded algebra GL(1|1). A regularization respecting this symmetry is proposed to deal with the contributions of the infinite towers of auxiliary fields and associated ghosts.
Cosmology of a covariant Galilean field.
De Felice, Antonio; Tsujikawa, Shinji
2010-09-10
We study the cosmology of a covariant scalar field respecting a Galilean symmetry in flat space-time. We show the existence of a tracker solution that finally approaches a de Sitter fixed point responsible for cosmic acceleration today. The viable region of model parameters is clarified by deriving conditions under which ghosts and Laplacian instabilities of scalar and tensor perturbations are absent. The field equation of state exhibits a peculiar phantomlike behavior along the tracker, which allows a possibility to observationally distinguish the Galileon gravity from the cold dark matter model with a cosmological constant.
Empirical Likelihood in Nonignorable Covariate-Missing Data Problems.
Xie, Yanmei; Zhang, Biao
2017-04-20
Missing covariate data occur often in regression analysis, frequently arising in the health and social sciences as well as in survey sampling. We study methods for the analysis of a nonignorable covariate-missing data problem in an assumed conditional mean function when some covariates are completely observed but other covariates are missing for some subjects. We adopt the semiparametric perspective of Bartlett et al. (Improving upon the efficiency of complete case analysis when covariates are MNAR. Biostatistics 2014;15:719-30) on regression analyses with nonignorable missing covariates, in which they have introduced the use of two working models, the working probability model of missingness and the working conditional score model. In this paper, we study an empirical likelihood approach to nonignorable covariate-missing data problems with the objective of effectively utilizing the two working models in the analysis of covariate-missing data. We propose a unified approach to constructing a system of unbiased estimating equations, where there are more equations than unknown parameters of interest. One useful feature of these unbiased estimating equations is that they naturally incorporate the incomplete data into the data analysis, making it possible to seek efficient estimation of the parameter of interest even when the working regression function is not specified to be the optimal regression function. We apply the general methodology of empirical likelihood to optimally combine these unbiased estimating equations. We propose three maximum empirical likelihood estimators of the underlying regression parameters and compare their efficiencies with other existing competitors. We present a simulation study to compare the finite-sample performance of various methods with respect to bias, efficiency, and robustness to model misspecification. The proposed empirical likelihood method is also illustrated by an analysis of a data set from the US National Health and
Covariant effective action for loop quantum cosmology from order reduction
Sotiriou, Thomas P.
2009-01-01
Loop quantum cosmology (LQC) seems to be predicting modified effective Friedmann equations without extra degrees of freedom. A puzzle arises if one decides to seek for a covariant effective action which would lead to the given Friedmann equation: The Einstein-Hilbert action is the only action that leads to second order field equations and, hence, there exists no covariant action which, under metric variation, leads to a modified Friedmann equation without extra degrees of freedom. It is shown that, at least for isotropic models in LQC, this issue is naturally resolved and a covariant effective action can be found if one considers higher order theories of gravity but faithfully follows effective field theory techniques. However, our analysis also raises doubts on whether a covariant description without background structures can be found for anisotropic models.
Using Covariant Lyapunov Vectors to Understand Spatiotemporal Chaos in Fluids
Paul, Mark; Xu, Mu; Barbish, Johnathon; Mukherjee, Saikat
2017-11-01
The spatiotemporal chaos of fluids presents many difficult and fascinating challenges. Recent progress in computing covariant Lyapunov vectors for a variety of model systems has made it possible to probe fundamental ideas from dynamical systems theory including the degree of hyperbolicity, the fractal dimension, the dimension of the inertial manifold, and the decomposition of the dynamics into a finite number of physical modes and spurious modes. We are interested in building upon insights such as these for fluid systems. We first demonstrate the power of covariant Lyapunov vectors using a system of maps on a lattice with a nonlinear coupling. We then compute the covariant Lyapunov vectors for chaotic Rayleigh-Bénard convection for experimentally accessible conditions. We show that chaotic convection is non-hyperbolic and we quantify the spatiotemporal features of the spectrum of covariant Lyapunov vectors. NSF DMS-1622299 and DARPA/DSO Models, Dynamics, and Learning (MoDyL).
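The forward stage of the standard (Ginelli et al.) procedure for covariant Lyapunov vectors is a QR iteration on the tangent dynamics; run on its own it yields the Lyapunov spectrum. A sketch for a diffusively coupled logistic lattice, in the spirit of the lattice-of-maps example above (parameter values are illustrative, not those of the study):

```python
import numpy as np

def lyapunov_spectrum(a=4.0, eps=0.1, n=16, steps=2000, transient=500, seed=0):
    """Lyapunov exponents of x' = W f(x), with f the logistic map and W a
    diffusive nearest-neighbour coupling, via repeated QR factorization
    of the tangent dynamics (forward stage of the CLV algorithm)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.1, 0.9, n)
    w = (1 - eps) * np.eye(n)
    w += (eps / 2) * (np.roll(np.eye(n), 1, 0) + np.roll(np.eye(n), -1, 0))
    f = lambda y: a * y * (1 - y)
    q = np.eye(n)
    sums = np.zeros(n)
    for t in range(transient + steps):
        jac = w @ np.diag(a * (1 - 2 * x))   # chain rule: coupling after local map
        x = w @ f(x)
        q, r = np.linalg.qr(jac @ q)
        if t >= transient:
            sums += np.log(np.abs(np.diag(r)))
    return np.sort(sums / steps)[::-1]
```

Extending this to covariant vectors requires the additional backward iteration on the stored `r` factors; the spectrum alone already diagnoses chaos (positive leading exponent).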
Galaxy-galaxy lensing estimators and their covariance properties
Singh, Sukhdeep; Mandelbaum, Rachel; Seljak, Uroš; Slosar, Anže; Vazquez Gonzalez, Jose
2017-11-01
We study the covariance properties of real space correlation function estimators - primarily galaxy-shear correlations, or galaxy-galaxy lensing - using SDSS data for both shear catalogues and lenses (specifically the BOSS LOWZ sample). Using mock catalogues of lenses and sources, we disentangle the various contributions to the covariance matrix and compare them with a simple analytical model. We show that not subtracting the lensing measurement around random points from the measurement around the lens sample is equivalent to performing the measurement using the lens density field instead of the lens overdensity field. While the measurement using the lens density field is unbiased (in the absence of systematics), its error is significantly larger due to an additional term in the covariance. Therefore, this subtraction should be performed regardless of its beneficial effects on systematics. Comparing the error estimates from data and mocks for estimators that involve the overdensity, we find that the errors are dominated by the shape noise and lens clustering, that empirically estimated covariances (jackknife and standard deviation across mocks) are consistent with theoretical estimates, and that both the connected parts of the four-point function and the supersample covariance can be neglected for the current levels of noise. While the trade-off between different terms in the covariance depends on the survey configuration (area, source number density), the diagnostics that we use in this work should be useful for future works to test their empirically determined covariances.
Galaxy–galaxy lensing estimators and their covariance properties
Singh, Sukhdeep; Mandelbaum, Rachel; Seljak, Uros; Slosar, Anze; Gonzalez, Jose Vazquez
2017-01-01
Here, we study the covariance properties of real space correlation function estimators – primarily galaxy–shear correlations, or galaxy–galaxy lensing – using SDSS data for both shear catalogues and lenses (specifically the BOSS LOWZ sample). Using mock catalogues of lenses and sources, we disentangle the various contributions to the covariance matrix and compare them with a simple analytical model. We show that not subtracting the lensing measurement around random points from the measurement around the lens sample is equivalent to performing the measurement using the lens density field instead of the lens overdensity field. While the measurement using the lens density field is unbiased (in the absence of systematics), its error is significantly larger due to an additional term in the covariance. Therefore, this subtraction should be performed regardless of its beneficial effects on systematics. Comparing the error estimates from data and mocks for estimators that involve the overdensity, we find that the errors are dominated by the shape noise and lens clustering, that empirically estimated covariances (jackknife and standard deviation across mocks) are consistent with theoretical estimates, and that both the connected parts of the four-point function and the supersample covariance can be neglected for the current levels of noise. While the trade-off between different terms in the covariance depends on the survey configuration (area, source number density), the diagnostics that we use in this work should be useful for future works to test their empirically determined covariances.
Group covariance and metrical theory
Halpern, L.
1983-01-01
The a priori introduction of a Lie group of transformations into a physical theory has often proved to be useful; it usually serves to describe special simplified conditions before a general theory can be worked out. Newton's assumptions of absolute space and time are examples where the Euclidian group and translation group have been introduced. These groups were extended to the Galilei group and modified in the special theory of relativity to the Poincare group to describe physics under the given conditions covariantly in the simplest way. The criticism of the a priori character leads to the formulation of the general theory of relativity. The general metric theory does not really give preference to a particular invariance group - even the principle of equivalence can be adapted to a whole family of groups. The physical laws covariantly inserted into the metric space are however adapted to the Poincare group. 8 references
Phenotypic covariance at species' borders.
Caley, M Julian; Cripps, Edward; Game, Edward T
2013-05-28
Understanding the evolution of species limits is important in ecology, evolution, and conservation biology. Despite its likely importance in the evolution of these limits, little is known about phenotypic covariance in geographically marginal populations, and the degree to which it constrains, or facilitates, responses to selection. We investigated phenotypic covariance in morphological traits at species' borders by comparing phenotypic covariance matrices (P), including the degree of shared structure, the distribution of strengths of pair-wise correlations between traits, the degree of morphological integration of traits, and the ranks of matrices, between central and marginal populations of three species-pairs of coral reef fishes. Greater structural differences in P were observed between populations close to range margins and conspecific populations toward range centres, than between pairs of conspecific populations that were both more centrally located within their ranges. Approximately 80% of all pair-wise trait correlations within populations were greater in the north, but these differences were unrelated to the position of the sampled population with respect to the geographic range of the species. Neither the degree of morphological integration, nor ranks of P, indicated greater evolutionary constraint at range edges. Characteristics of P observed here provide no support for constraint contributing to the formation of these species' borders, but may instead reflect structural change in P caused by selection or drift, and their potential to evolve in the future.
Visualization and assessment of spatio-temporal covariance properties
Huang, Huang
2017-11-23
Spatio-temporal covariances are important for describing the spatio-temporal variability of underlying random fields in geostatistical data. For second-order stationary random fields, there exist subclasses of covariance functions that assume a simpler spatio-temporal dependence structure with separability and full symmetry. However, it is challenging to visualize and assess separability and full symmetry from spatio-temporal observations. In this work, we propose a functional data analysis approach that constructs test functions using the cross-covariances from time series observed at each pair of spatial locations. These test functions of temporal lags summarize the properties of separability or symmetry for the given spatial pairs. We use functional boxplots to visualize the functional median and the variability of the test functions, where the extent of departure from zero at all temporal lags indicates the degree of non-separability or asymmetry. We also develop a rank-based nonparametric testing procedure for assessing the significance of the non-separability or asymmetry. Essentially, the proposed methods only require the analysis of temporal covariance functions. Thus, a major advantage over existing approaches is that there is no need to estimate any covariance matrix for selected spatio-temporal lags. The performances of the proposed methods are examined by simulations with various commonly used spatio-temporal covariance models. To illustrate our methods in practical applications, we apply them to real datasets, including weather station data and climate model outputs.
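The logic behind such test functions can be sketched simply: under separability, C(h, u) = C_s(h)·C_t(u), so the cross-covariance between any two sites, normalized by its lag-0 value, coincides with the normalized auto-covariance; their difference should hover near zero at all lags. A hedged numpy illustration (not the paper's exact construction):

```python
import numpy as np

def cross_covariance(a, b, max_lag):
    """Empirical cross-covariance of two demeaned time series at lags 0..max_lag."""
    a = a - a.mean()
    b = b - b.mean()
    n = len(a)
    return np.array([np.dot(a[:n - u], b[u:]) / n for u in range(max_lag + 1)])

def separability_test_function(x_i, x_j, max_lag):
    """Under separability the lag-normalized cross- and auto-covariances
    agree, so this difference is a test function that should be near zero."""
    cij = cross_covariance(x_i, x_j, max_lag)
    cii = cross_covariance(x_i, x_i, max_lag)
    return cij / cij[0] - cii / cii[0]
```

Plotting such curves for many site pairs (e.g. in a functional boxplot) visualizes how strongly the data depart from separability.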
A three domain covariance framework for EEG/MEG data.
Roś, Beata P; Bijma, Fetsje; de Gunst, Mathisca C M; de Munck, Jan C
2015-10-01
In this paper we introduce a covariance framework for the analysis of single subject EEG and MEG data that takes into account observed temporal stationarity on small time scales and trial-to-trial variations. We formulate a model for the covariance matrix, which is a Kronecker product of three components that correspond to space, time and epochs/trials, and consider maximum likelihood estimation of the unknown parameter values. An iterative algorithm that finds approximations of the maximum likelihood estimates is proposed. Our covariance model is applicable in a variety of cases where spontaneous EEG or MEG acts as source of noise and realistic noise covariance estimates are needed, such as in evoked activity studies, or where the properties of spontaneous EEG or MEG are themselves the topic of interest, like in combined EEG-fMRI experiments in which the correlation between EEG and fMRI signals is investigated. We use a simulation study to assess the performance of the estimator and investigate the influence of different assumptions about the covariance factors on the estimated covariance matrix and on its components. We apply our method to real EEG and MEG data sets. Copyright © 2015 Elsevier Inc. All rights reserved.
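The three-domain structure described above is easy to state in code: the covariance of the vectorized single-trial data is a Kronecker product of a trial, a temporal and a spatial factor. A minimal sketch (the factor ordering is a modeling convention, and the iterative "flip-flop" ML estimation the paper uses is not shown):

```python
import numpy as np

def kronecker_covariance(space, time, trial):
    """Covariance of the vectorized EEG/MEG data under the three-factor
    Kronecker model: trial (x) time (x) space."""
    return np.kron(np.kron(trial, time), space)
```

Because eigenvalues of a Kronecker product are products of the factors' eigenvalues, positive-definite factors guarantee a positive-definite (and cheaply invertible) full covariance, which is what makes the model tractable at realistic sensor-by-sample dimensions.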
Oikawa, P. Y.; Baldocchi, D. D.; Knox, S. H.; Sturtevant, C. S.; Verfaillie, J. G.; Dronova, I.; Jenerette, D.; Poindexter, C.; Huang, Y. W.
2015-12-01
We use multiple data streams in a model-data fusion approach to reduce uncertainty in predicting CO2 and CH4 exchange in drained and flooded peatlands. Drained peatlands in the Sacramento-San Joaquin River Delta, California are a strong source of CO2 to the atmosphere and flooded peatlands or wetlands are a strong CO2 sink. However, wetlands are also large sources of CH4 that can offset the greenhouse gas mitigation potential of wetland restoration. Reducing uncertainty in model predictions of annual CO2 and CH4 budgets is critical for including wetland restoration in Cap-and-Trade programs. We have developed and parameterized the Peatland Ecosystem Photosynthesis, Respiration, and Methane Transport model (PEPRMT) in a drained agricultural peatland and a restored wetland. Both ecosystem respiration (Reco) and CH4 production are a function of 2 soil carbon (C) pools (i.e. recently-fixed C and soil organic C), temperature, and water table height. Photosynthesis is predicted using a light use efficiency model. To estimate parameters we use a Markov Chain Monte Carlo approach with an adaptive Metropolis-Hastings algorithm. Multiple data streams are used to constrain model parameters including eddy covariance of CO2, 13CO2 and CH4, continuous soil respiration measurements and digital photography. Digital photography is used to estimate leaf area index, an important input variable for the photosynthesis model. Soil respiration and 13CO2 fluxes allow partitioning of eddy covariance data between Reco and photosynthesis. Partitioned fluxes of CO2 with associated uncertainty are used to parametrize the Reco and photosynthesis models within PEPRMT. Overall, PEPRMT model performance is high. For example, we observe high data-model agreement between modeled and observed partitioned Reco (r2 = 0.68; slope = 1; RMSE = 0.59 g C-CO2 m-2 d-1). Model validation demonstrated the model's ability to accurately predict annual budgets of CO2 and CH4 in a wetland system (within 14% and 1
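The parameter-estimation step described above rests on random-walk Metropolis-Hastings sampling of the posterior. A bare-bones, non-adaptive sketch (the study uses an adaptive variant that tunes the proposal on the fly; names here are illustrative):

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_steps, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings: propose a Gaussian step, accept
    with probability min(1, posterior ratio)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    chain = []
    for _ in range(n_steps):
        prop = x + step * rng.standard_normal(x.shape)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis acceptance rule
            x, lp = prop, lp_prop
        chain.append(x.copy())
    return np.array(chain)
```

In the model-data-fusion setting, `log_post` would combine the priors with likelihood terms from each data stream (eddy covariance CO2, 13CO2, CH4, soil respiration), which is how multiple observations jointly constrain the parameters.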
Proofs of Contracted Length Non-covariance
Strel'tsov, V.N.
1994-01-01
Different proofs of contracted length non-covariance are discussed. The way based on the establishment of interval inconstancy (dependence on velocity) seems to be the most convincing one. It is stressed that the known non-covariance of the electromagnetic field energy and momentum of a moving charge ('the problem 4/3') is a direct consequence of contracted length non-covariance. 8 refs
Structural Analysis of Covariance and Correlation Matrices.
Joreskog, Karl G.
1978-01-01
A general approach to analysis of covariance structures is considered, in which the variances and covariances or correlations of the observed variables are directly expressed in terms of the parameters of interest. The statistical problems of identification, estimation and testing of such covariance or correlation structures are discussed.…
Construction of covariance matrix for experimental data
Liu Tingjin; Zhang Jianhua
1992-01-01
For evaluators and experimenters, the information is complete only in the case when the covariance matrix is given. The covariance matrix of the indirectly measured data has been constructed and discussed. As an example, the covariance matrix of the ²³Na(n,2n) cross section is constructed. A reasonable result is obtained.
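For indirectly measured data, the standard construction is the sandwich rule: if the reported quantities are y = f(x) with sensitivity matrix J = ∂f/∂x, the covariance of y follows from the covariance of the directly measured x. A minimal sketch:

```python
import numpy as np

def propagate_covariance(jacobian, v_x):
    """Sandwich rule for indirect measurements: V_y = J V_x J^T."""
    return jacobian @ v_x @ jacobian.T
```

Off-diagonal elements of the result are what encode the correlations between derived data points, which is exactly the information evaluators need beyond bare uncertainties.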
Elizângela Emídio Cunha
2010-09-01
The aim of this work was to investigate the short-term behavior of the genetic variability of quantitative traits simulated from models with additive and non-additive gene action in control and phenotypic-selection populations. Both traits, one with low (h² = 0.10) and the other with high (h² = 0.60) heritability, were controlled by 600 biallelic loci. From a standard genome, six genetic models were obtained, which included the following: only additive gene effects; complete and positive dominance for 25, 50, 75 and 100% of the loci; and positive overdominance for 50% of the loci. In the models with dominance deviation, the additive allelic effects were also included for 100% of the loci. Genetic variability was quantified from generation to generation using the genetic variance components. In the absence of selection, genotypic and additive genetic variances were higher. In the models with non-additive gene action, a covariance component of small magnitude arose between the additive and dominance genetic effects, whose correlation tended to be positive in the control population and negative under selection. Dominance variance increased as the number of loci with dominance deviation or the value of the deviation increased, implying an increase in genotypic and additive genetic variances across the successive models.
Lorentz covariant theory of gravitation
Fagundes, H.V.
1974-12-01
An alternative method for the calculation of second-order effects, such as the secular shift of Mercury's perihelion, is developed. This method uses the basic ideas of Thirring combined with the more mathematical approach of Feynman. In the case of a static source, the treatment used is greatly simplified. Besides, the Einstein-Infeld-Hoffmann Lagrangian for a system of two particles, and the spin-orbit and spin-spin interactions of two particles with classical spin, i.e., internal angular momentum in Møller's sense, are obtained from the Lorentz covariant theory.
Covariant gauges at finite temperature
Landshoff, Peter V
1992-01-01
A prescription is presented for real-time finite-temperature perturbation theory in covariant gauges, in which only the two physical degrees of freedom of the gauge-field propagator acquire thermal parts. The propagators for the unphysical degrees of freedom of the gauge field, and for the Faddeev-Popov ghost field, are independent of temperature. This prescription is applied to the calculation of the one-loop gluon self-energy and the two-loop interaction pressure, and is found to be simpler to use than the conventional one.
Large Covariance Estimation by Thresholding Principal Orthogonal Complements
Fan, Jianqing; Liao, Yuan; Mincheva, Martina
2012-01-01
This paper deals with the estimation of a high-dimensional covariance with a conditional sparsity structure and fast-diverging eigenvalues. By assuming sparse error covariance matrix in an approximate factor model, we allow for the presence of some cross-sectional correlation even after taking out common but unobservable factors. We introduce the Principal Orthogonal complEment Thresholding (POET) method to explore such an approximate factor structure with sparsity. The POET estimator includes the sample covariance matrix, the factor-based covariance matrix (Fan, Fan, and Lv, 2008), the thresholding estimator (Bickel and Levina, 2008) and the adaptive thresholding estimator (Cai and Liu, 2011) as specific examples. We provide mathematical insights when the factor analysis is approximately the same as the principal component analysis for high-dimensional data. The rates of convergence of the sparse residual covariance matrix and the conditional sparse covariance matrix are studied under various norms. It is shown that the impact of estimating the unknown factors vanishes as the dimensionality increases. The uniform rates of convergence for the unobserved factors and their factor loadings are derived. The asymptotic results are also verified by extensive simulation studies. Finally, a real data application on portfolio allocation is presented. PMID:24348088
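The POET construction described above can be sketched in a few lines: keep the leading principal-component part of the sample covariance as the factor contribution, then soft-threshold the off-diagonal entries of the residual. A hedged numpy illustration (fixed threshold; the paper uses an adaptive, entry-dependent one):

```python
import numpy as np

def poet(sample_cov, n_factors, threshold):
    """Principal Orthogonal complEment Thresholding, simplified:
    low-rank part from the top eigenvectors plus a soft-thresholded
    sparse residual. The diagonal is never thresholded."""
    w, v = np.linalg.eigh(sample_cov)
    idx = np.argsort(w)[::-1][:n_factors]
    low_rank = (v[:, idx] * w[idx]) @ v[:, idx].T
    resid = sample_cov - low_rank
    off = np.sign(resid) * np.maximum(np.abs(resid) - threshold, 0.0)
    np.fill_diagonal(off, np.diag(resid))
    return low_rank + off
```

With `n_factors=0` and `threshold=0` this reduces to the sample covariance, and with a large threshold it approaches the factor-based estimator, mirroring the special cases listed in the abstract.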
Massive data compression for parameter-dependent covariance matrices
Heavens, Alan F.; Sellentin, Elena; de Mijolla, Damien; Vianello, Alvise
2017-12-01
We show how the massive data compression algorithm MOPED can be used to reduce, by orders of magnitude, the number of simulated data sets which are required to estimate the covariance matrix required for the analysis of Gaussian-distributed data. This is relevant when the covariance matrix cannot be calculated directly. The compression is especially valuable when the covariance matrix varies with the model parameters. In this case, it may be prohibitively expensive to run enough simulations to estimate the full covariance matrix throughout the parameter space. This compression may be particularly valuable for the next generation of weak lensing surveys, such as proposed for Euclid and the Large Synoptic Survey Telescope, for which the number of summary data (such as band power or shear correlation estimates) is very large, ∼10⁴, due to the large number of tomographic redshift bins into which the data will be divided. In the pessimistic case where the covariance matrix is estimated separately for all points in a Markov Chain Monte Carlo analysis, this may require an unfeasible 10⁹ simulations. We show here that MOPED can reduce this number by a factor of 1000, or a factor of ∼10⁶ if some regularity in the covariance matrix is assumed, reducing the number of simulations required to a manageable 10³, making an otherwise intractable analysis feasible.
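MOPED's compression is linear: for Gaussian data with mean μ(θ) and covariance C, one compression vector per parameter is built from C⁻¹ ∂μ/∂θᵢ and Gram-Schmidt orthonormalized in the C inner product, so the dataset collapses to one number per parameter. A hedged sketch of that construction:

```python
import numpy as np

def moped_vectors(dmu, cov):
    """One MOPED vector per parameter: b_i ~ C^{-1} dmu_i, orthonormalized
    so that b_i^T C b_j = delta_ij.  dmu has shape (n_params, n_data)."""
    cinv = np.linalg.inv(cov)
    bs = []
    for g in dmu:
        b = cinv @ g
        for prev in bs:                      # Gram-Schmidt in the C inner product
            b = b - (prev @ cov @ b) * prev
        b = b / np.sqrt(b @ cov @ b)
        bs.append(b)
    return np.array(bs)

def compress(data, b_matrix):
    """Project a data vector onto the MOPED vectors: n_params summaries."""
    return b_matrix @ data
```

The covariance of the compressed summaries is the identity by construction, which is why far fewer simulations suffice to estimate it than for the full ∼10⁴-dimensional data vector.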
Large Covariance Estimation by Thresholding Principal Orthogonal Complements.
Fan, Jianqing; Liao, Yuan; Mincheva, Martina
2013-09-01
This paper deals with the estimation of a high-dimensional covariance with a conditional sparsity structure and fast-diverging eigenvalues. By assuming sparse error covariance matrix in an approximate factor model, we allow for the presence of some cross-sectional correlation even after taking out common but unobservable factors. We introduce the Principal Orthogonal complEment Thresholding (POET) method to explore such an approximate factor structure with sparsity. The POET estimator includes the sample covariance matrix, the factor-based covariance matrix (Fan, Fan, and Lv, 2008), the thresholding estimator (Bickel and Levina, 2008) and the adaptive thresholding estimator (Cai and Liu, 2011) as specific examples. We provide mathematical insights when the factor analysis is approximately the same as the principal component analysis for high-dimensional data. The rates of convergence of the sparse residual covariance matrix and the conditional sparse covariance matrix are studied under various norms. It is shown that the impact of estimating the unknown factors vanishes as the dimensionality increases. The uniform rates of convergence for the unobserved factors and their factor loadings are derived. The asymptotic results are also verified by extensive simulation studies. Finally, a real data application on portfolio allocation is presented.
Piecewise linear regression splines with hyperbolic covariates
Cologne, John B.; Sposto, Richard
1992-09-01
Consider the problem of fitting a curve to data that exhibit a multiphase linear response with smooth transitions between phases. We propose substituting hyperbolas as covariates in piecewise linear regression splines to obtain curves that are smoothly joined. The method provides an intuitive and easy way to extend the two-phase linear hyperbolic response model of Griffiths and Miller and of Watts and Bacon to accommodate more than two linear segments. The resulting regression spline with hyperbolic covariates may be fit by nonlinear regression methods to estimate the degree of curvature between adjoining linear segments. The added complexity of fitting nonlinear, as opposed to linear, regression models is not great. The extra effort is particularly worthwhile when investigators are unwilling to assume that the slope of the response changes abruptly at the join points. We can also estimate the join points (the values of the abscissas where the linear segments would intersect if extrapolated) if their number and approximate locations may be presumed known. An example using data on changing age at menarche in a cohort of Japanese women illustrates the use of the method for exploratory data analysis. (author)
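The substitution can be sketched concretely: the broken-stick basis function max(x − k, 0) is replaced by one branch of a hyperbola that has the same asymptotes but a smooth bend whose curvature is set by a parameter γ. For fixed knot and curvature the model is linear in the coefficients (a simplification; the paper estimates knots and curvature by nonlinear regression). Function names are illustrative:

```python
import numpy as np

def hyperbolic_hinge(x, knot, gamma):
    """Smooth stand-in for max(x - knot, 0): asymptotes 0 and (x - knot),
    with bend curvature controlled by gamma (gamma -> 0 recovers the kink)."""
    t = x - knot
    return 0.5 * (t + np.sqrt(t ** 2 + gamma ** 2))

def fit_two_phase(x, y, knot, gamma):
    """Least-squares fit of a smoothly joined two-phase line: intercept,
    baseline slope, and slope change at the (fixed) knot."""
    design = np.column_stack([np.ones_like(x), x, hyperbolic_hinge(x, knot, gamma)])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return beta
```

Additional hinge columns with further knots extend this to more than two linear segments, which is the generalization the abstract describes.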
Parity doubling in the baryon string model
Khokhlachev, S.B.
1990-01-01
The nature of parity doubling of baryon states with non-zero angular momentum is considered. The idea of explaining this phenomenon lies in the fact that the rotation of the gluon string leads to a centrifugal potential for quarks. The quarks on the string form a quark-diquark system. Quark tunneling from one end of the string to the other is not probable for systems with large angular momentum due to a large centrifugal potential, and the smallness of the underbarrier transition amplitude explains the small mass difference of the states with opposite parity. (orig.)
Covariance Evaluation Methodology for Neutron Cross Sections
Herman, M.; Arcilla, R.; Mattoon, C.M.; Mughabghab, S.F.; Oblozinsky, P.; Pigni, M.; Pritychenko, B.; Sonzogni, A.A.
2008-09-01
We present the NNDC-BNL methodology for estimating neutron cross section covariances in the thermal, resolved resonance, unresolved resonance and fast neutron regions. The three key elements of the methodology are the Atlas of Neutron Resonances, the nuclear reaction code EMPIRE, and the Bayesian code implementing the Kalman filter concept. The covariance data processing, visualization and distribution capabilities are integral components of the NNDC methodology. We illustrate its application on examples including relatively detailed evaluation of covariances for two individual nuclei and massive production of simple covariance estimates for 307 materials. Certain peculiarities regarding the evaluation of covariances for resolved resonances and the consistency between resonance parameter uncertainties and thermal cross section uncertainties are also discussed.
Mäntyniemi, Samu; Uusitalo, Laura; Peltonen, Heikki
2013-01-01
We developed a generic, age-structured, state-space stock assessment model that can be used as a platform for including information elicited from stakeholders. The model tracks the mean size-at-age and then uses it to explain rates of natural and fishing mortality. The fishery selectivity is divided into two components, which makes it possible to model the active seeking of the fleet for certain sizes of fish, as well as the selectivity of the gear itself. The model can account for uncertainties that are not currently accounted for in state-of-the-art models for integrated assessments: (i) the form of the stock–recruitment function is considered uncertain and is accounted for by using Bayesian model averaging; (ii) in addition to recruitment variation, process variation in natural mortality, growth parameters, and fishing mortality can also be treated as uncertain parameters...
Sparse reduced-rank regression with covariance estimation
Chen, Lisha; Huang, Jianhua Z.
2014-01-01
Improving the predicting performance of the multiple response regression compared with separate linear regressions is a challenging question. On the one hand, it is desirable to seek model parsimony when facing a large number of parameters. On the other hand, for certain applications it is necessary to take into account the general covariance structure for the errors of the regression model. We assume a reduced-rank regression model and work with the likelihood function with general error covariance to achieve both objectives. In addition we propose to select relevant variables for reduced-rank regression by using a sparsity-inducing penalty, and to estimate the error covariance matrix simultaneously by using a similar penalty on the precision matrix. We develop a numerical algorithm to solve the penalized regression problem. In a simulation study and real data analysis, the new method is compared with two recent methods for multivariate regression and exhibits competitive performance in prediction and variable selection.
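The reduced-rank assumption at the core of the model above can be illustrated with the classical (unpenalized, identity error covariance) reduced-rank regression step: fit by ordinary least squares, then project the fitted values onto their leading principal directions. This is a sketch of the ingredient the paper builds on; the sparsity-inducing penalties and the general error covariance estimation are omitted, and all function names are our own.

```python
import numpy as np

def reduced_rank_regression(X, Y, rank):
    """Rank-constrained multivariate regression (identity error covariance).

    The full-rank least-squares coefficient matrix is projected onto the
    leading right singular directions of the fitted values, giving the
    best rank-r coefficient matrix in the least-squares sense.
    """
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)   # full-rank LS solution
    fitted = X @ B_ols
    _, _, Vt = np.linalg.svd(fitted, full_matrices=False)
    P = Vt[:rank].T @ Vt[:rank]                     # rank-r projector in response space
    return B_ols @ P

# Simulated data whose true coefficient matrix has rank 2.
rng = np.random.default_rng(1)
n, p, q, r = 300, 6, 5, 2
A = rng.normal(size=(p, r))
C = rng.normal(size=(r, q))
X = rng.normal(size=(n, p))
Y = X @ (A @ C) + 0.05 * rng.normal(size=(n, q))
B_hat = reduced_rank_regression(X, Y, rank=r)
```

The estimator returns a coefficient matrix of exactly the requested rank while remaining close to the true low-rank matrix.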
Covariance and correlation estimation in electron-density maps.
Altomare, Angela; Cuocci, Corrado; Giacovazzo, Carmelo; Moliterni, Anna; Rizzi, Rosanna
2012-03-01
Quite recently two papers have been published [Giacovazzo & Mazzone (2011). Acta Cryst. A67, 210-218; Giacovazzo et al. (2011). Acta Cryst. A67, 368-382] which calculate the variance in any point of an electron-density map at any stage of the phasing process. The main aim of the papers was to associate a standard deviation to each pixel of the map, in order to obtain a better estimate of the map reliability. This paper deals with the covariance estimate between points of an electron-density map in any space group, centrosymmetric or non-centrosymmetric, no matter the correlation between the model and target structures. The aim is as follows: to verify if the electron density in one point of the map is amplified or depressed as an effect of the electron density in one or more other points of the map. High values of the covariances are usually connected with undesired features of the map. The phases are the primitive random variables of our probabilistic model; the covariance changes with the quality of the model and therefore with the quality of the phases. The conclusive formulas show that the covariance is also influenced by the Patterson map. Uncertainty on measurements may influence the covariance, particularly in the final stages of the structure refinement; a general formula is obtained taking into account both phase and measurement uncertainty, valid at any stage of the crystal structure solution.
Fast Component Pursuit for Large-Scale Inverse Covariance Estimation.
Han, Lei; Zhang, Yu; Zhang, Tong
2016-08-01
The maximum likelihood estimation (MLE) for the Gaussian graphical model, which is also known as the inverse covariance estimation problem, has gained increasing interest recently. Most existing works assume that inverse covariance estimators contain sparse structure and then construct models with the ℓ1 regularization. In this paper, different from existing works, we study the inverse covariance estimation problem from another perspective by efficiently modeling the low-rank structure in the inverse covariance, which is assumed to be a combination of a low-rank part and a diagonal matrix. One motivation for this assumption is that the low-rank structure is common in many applications including climate and financial analysis, and another is that such an assumption can reduce the computational complexity when computing its inverse. Specifically, we propose an efficient COmponent Pursuit (COP) method to obtain the low-rank part, where each component can be sparse. For optimization, the COP method greedily learns a rank-one component in each iteration by maximizing the log-likelihood. Moreover, the COP algorithm enjoys several appealing properties including the existence of an efficient solution in each iteration and the theoretical guarantee on the convergence of this greedy approach. Experiments on large-scale synthetic and real-world datasets including thousands of millions of variables show that the COP method is faster than the state-of-the-art techniques for the inverse covariance estimation problem when achieving comparable log-likelihood on test data.
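One of the stated motivations, cheaper inversion under the low-rank-plus-diagonal assumption, follows from the Woodbury identity: inverting Omega = diag(d) + V Vᵀ costs O(p k²) instead of O(p³). The sketch below shows only this algebraic point, not the COP algorithm itself; the function name is our own.

```python
import numpy as np

def inverse_of_lowrank_plus_diag(d, V):
    """Invert Omega = diag(d) + V V^T via the Woodbury identity.

    Only a small k x k system is solved, which is the computational
    advantage the low-rank-plus-diagonal structure buys.
    """
    Dinv = 1.0 / d                              # diagonal inverse, elementwise
    U = Dinv[:, None] * V                       # D^{-1} V
    k = V.shape[1]
    core = np.linalg.inv(np.eye(k) + V.T @ U)   # small k x k core matrix
    return np.diag(Dinv) - U @ core @ U.T

rng = np.random.default_rng(2)
p, k = 50, 3
d = 1.0 + rng.random(p)
V = rng.normal(size=(p, k))
Sigma = inverse_of_lowrank_plus_diag(d, V)
Omega = np.diag(d) + V @ V.T                    # the matrix being inverted
```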
Yunjun Yao; Shunlin Liang; Xianglan Li; Shaomin Liu; Jiquan Chen; Xiaotong Zhang; Kun Jia; Bo Jiang; Xianhong Xie; Simon Munier; Meng Liu; Jian Yu; Anders Lindroth; Andrej Varlagin; Antonio Raschi; Asko Noormets; Casimiro Pio; Georg Wohlfahrt; Ge Sun; Jean-Christophe Domec; Leonardo Montagnani; Magnus Lund; Moors Eddy; Peter D. Blanken; Thomas Grunwald; Sebastian Wolf; Vincenzo Magliulo
2016-01-01
The latent heat flux (LE) between the terrestrial biosphere and atmosphere is a major driver of the global hydrological cycle. In this study, we evaluated LE simulations by 45 general circulation models (GCMs) in the Coupled Model Intercomparison Project Phase 5 (CMIP5) by a comparison...
Islamiyati, A.; Fatmawati; Chamidah, N.
2018-03-01
In bi-response longitudinal data, correlation occurs among measurements on the same subject and between the responses. This induces autocorrelated errors, which can be handled through a covariance matrix. In this article, we estimate the covariance matrix based on the penalized spline regression model. Penalized splines involve knot points and smoothing parameters simultaneously in controlling the smoothness of the curve. Based on our simulation study, the estimated regression model of the weighted penalized spline with a covariance matrix gives a smaller error value than the model without a covariance matrix.
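The penalized-spline ingredient used above combines a spline basis with knots and a smoothing parameter in one ridge-type criterion. The following generic sketch shows that ingredient only (truncated-linear basis, single response); it is not the authors' weighted bi-response estimator, and the basis choice and function name are our own assumptions.

```python
import numpy as np

def penalized_spline_fit(x, y, knots, lam):
    """Penalized regression spline with a truncated-linear basis.

    lam is the smoothing parameter; only the truncated-basis (knot)
    coefficients are penalized, not the global intercept and slope.
    """
    B = np.column_stack([np.ones_like(x), x] +
                        [np.maximum(x - k, 0.0) for k in knots])
    P = np.diag([0.0, 0.0] + [1.0] * len(knots))     # penalty matrix
    beta = np.linalg.solve(B.T @ B + lam * P, B.T @ y)
    return B @ beta

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 150)
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=x.size)
fit = penalized_spline_fit(x, y, knots=np.linspace(0.1, 0.9, 9), lam=1e-3)
```

The knots control where the fitted curve may bend and lam controls how strongly those bends are shrunk toward a straight line.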
Bayesian tests on components of the compound symmetry covariance matrix
Mulder, J.; Fox, J.P.
2013-01-01
Complex dependency structures are often conditionally modeled, where random effects parameters are used to specify the natural heterogeneity in the population. When interest is focused on the dependency structure, inferences can be made from a complex covariance matrix using a marginal modeling
Rotational covariance and light-front current matrix elements
Keister, B.D.
1994-01-01
Light-front current matrix elements for elastic scattering from hadrons with spin 1 or greater must satisfy a nontrivial constraint associated with the requirement of rotational covariance for the current operator. Using a model ρ meson as a prototype for hadronic quark models, this constraint and its implications are studied at both low and high momentum transfers. In the kinematic region appropriate for asymptotic QCD, helicity rules, together with the rotational covariance condition, yield an additional relation between the light-front current matrix elements
Estimating surface fluxes using eddy covariance and numerical ogive optimization
Sievers, J.; Papakyriakou, T.; Larsen, Søren Ejling
2015-01-01
Estimating representative surface fluxes using eddy covariance leads invariably to questions concerning inclusion or exclusion of low-frequency flux contributions. For studies where fluxes are linked to local physical parameters and up-scaled through numerical modelling efforts, low-frequency contributions...
Cross-covariance functions for multivariate random fields based on latent dimensions
Apanasovich, T. V.
2010-02-16
The problem of constructing valid parametric cross-covariance functions is challenging. We propose a simple methodology, based on latent dimensions and existing covariance models for univariate random fields, to develop flexible, interpretable and computationally feasible classes of cross-covariance functions in closed form. We focus on spatio-temporal cross-covariance functions that can be nonseparable, asymmetric and can have different covariance structures, for instance different smoothness parameters, in each component. We discuss estimation of these models and perform a small simulation study to demonstrate our approach. We illustrate our methodology on a trivariate spatio-temporal pollution dataset from California and demonstrate that our cross-covariance performs better than other competing models. © 2010 Biometrika Trust.
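The latent-dimension construction can be made concrete: each component of the multivariate field is assigned a coordinate in an extra latent dimension, and an ordinary univariate covariance is evaluated on the augmented distance, so validity (positive definiteness) is inherited from the univariate model. The sketch below uses an exponential covariance and is a minimal illustration under our own parameter choices, not the paper's spatio-temporal models.

```python
import numpy as np

def latent_cross_covariance(s, t, xi_i, xi_j, scale=1.0):
    """Cross-covariance between components i and j at locations s and t.

    Each component gets a latent coordinate xi; a univariate exponential
    covariance applied to the augmented distance yields a valid
    cross-covariance in closed form.
    """
    d = np.sqrt((s - t) ** 2 + (xi_i - xi_j) ** 2)
    return np.exp(-d / scale)

# Assemble the full covariance of a bivariate field on a 1-D grid and
# verify it is a valid (positive semidefinite) covariance matrix.
grid = np.linspace(0.0, 5.0, 30)
xi = [0.0, 0.7]        # latent coordinates; their gap sets the cross-correlation
n = grid.size
K = np.empty((2 * n, 2 * n))
for a in range(2):
    for b in range(2):
        S, T = np.meshgrid(grid, grid, indexing="ij")
        K[a * n:(a + 1) * n, b * n:(b + 1) * n] = \
            latent_cross_covariance(S, T, xi[a], xi[b])
eigmin = np.linalg.eigvalsh(K).min()
```

At zero spatial lag the cross-correlation between the two components is exp(−0.7), controlled entirely by the latent gap.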
Extended covariance data formats for the ENDF/B-VI differential data evaluation
Peelle, R.W.; Muir, D.W.
1988-01-01
The ENDF/B-V included cross section covariance data, but covariances could not be encoded for all the important data types. New ENDF-6 covariance formats are outlined including those for cross-file (MF) covariances, resonance parameters over the whole range, and secondary energy and angle distributions. One ''late entry'' format encodes covariance data for cross sections that are output from model or fitting codes in terms of the model parameter covariance matrix and the tabulated derivatives of cross sections with respect to the model parameters. Another new format yields multigroup cross section variances that increase as the group width decreases. When evaluators use the new formats, the files can be processed and used for improved uncertainty propagation and data combination. 22 refs
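The ''late entry'' format described above stores a model-parameter covariance matrix together with tabulated derivatives of the cross sections with respect to the parameters; a processing code then reconstructs the cross-section covariance with the standard sandwich rule, cov(σ) = S C_p Sᵀ. The sketch below shows that propagation step only, with made-up numbers; it does not read or write any ENDF format.

```python
import numpy as np

def cross_section_covariance(sens, param_cov):
    """Sandwich rule: propagate a model-parameter covariance C_p to a
    cross-section covariance via the sensitivity matrix S = d(sigma)/d(p)."""
    return sens @ param_cov @ sens.T

# Three energy groups, two model parameters (illustrative values only).
sens = np.array([[1.0, 0.2],
                 [0.5, 1.0],
                 [0.0, 2.0]])
param_cov = np.array([[0.04, 0.01],
                      [0.01, 0.09]])
cov = cross_section_covariance(sens, param_cov)
```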
Covariance problem in two-dimensional quantum chromodynamics
Hagen, C.R.
1979-01-01
The problem of covariance in the field theory of a two-dimensional non-Abelian gauge field is considered. Since earlier work has shown that covariance fails (in charged sectors) for the Schwinger model, particular attention is given to an evaluation of the role played by the non-Abelian nature of the fields. In contrast to all earlier attempts at this problem, it is found that the potential covariance-breaking terms are identical to those found in the Abelian theory provided that one expresses them in terms of the total (i.e., conserved) current operator. The question of covariance is thus seen to reduce in all cases to a determination as to whether there exists a conserved global charge in the theory. Since the charge operator in the Schwinger model is conserved only in neutral sectors, one is thereby led to infer a probable failure of covariance in the non-Abelian theory, but one which is identical to that found for the U(1) case
Malerba, Paola; Straudi, Sofia; Fregni, Felipe; Bazhenov, Maxim; Basaglia, Nino
2017-01-01
Stroke is a leading cause of worldwide disability, and up to 75% of survivors suffer from some degree of arm paresis. Recently, rehabilitation of stroke patients has focused on recovering motor skills by taking advantage of use-dependent neuroplasticity, where high-repetition of goal-oriented movement is at times combined with non-invasive brain stimulation, such as transcranial direct current stimulation (tDCS). Merging the two approaches is thought to provide outlasting clinical gains, by enhancing synaptic plasticity and motor relearning in the motor cortex primary area. However, this general approach has shown mixed results across the stroke population. In particular, stroke location has been found to correlate with the likelihood of success, which suggests that different patients might require different protocols. Understanding how motor rehabilitation and stimulation interact with ongoing neural dynamics is crucial to optimize rehabilitation strategies, but it requires theoretical and computational models to consider the multiple levels at which this complex phenomenon operate. In this work, we argue that biophysical models of cortical dynamics are uniquely suited to address this problem. Specifically, biophysical models can predict treatment efficacy by introducing explicit variables and dynamics for damaged connections, changes in neural excitability, neurotransmitters, neuromodulators, plasticity mechanisms, and repetitive movement, which together can represent brain state, effect of incoming stimulus, and movement-induced activity. In this work, we hypothesize that effects of tDCS depend on ongoing neural activity and that tDCS effects on plasticity may be also related to enhancing inhibitory processes. We propose a model design for each step of this complex system, and highlight strengths and limitations of the different modeling choices within our approach. Our theoretical framework proposes a change in paradigm, where biophysical models can contribute
ISSUES IN NEUTRON CROSS SECTION COVARIANCES
Mattoon, C.M.; Oblozinsky, P.
2010-04-30
We review neutron cross section covariances in both the resonance and fast neutron regions with the goal to identify existing issues in evaluation methods and their impact on covariances. We also outline ideas for suitable covariance quality assurance procedures. We show that the topic of covariance data remains controversial, the evaluation methodologies are not fully established and covariances produced by different approaches have unacceptable spread. The main controversy is in very low uncertainties generated by rigorous evaluation methods and much larger uncertainties based on simple estimates from experimental data. Since the evaluators tend to trust the former, while the users tend to trust the latter, this controversy has considerable practical implications. Dedicated effort is needed to arrive at covariance evaluation methods that would resolve this issue and produce results accepted internationally both by evaluators and users.
Improvement of covariance data for fast reactors
Shibata, Keiichi; Hasegawa, Akira
2000-02-01
We estimated covariances of the JENDL-3.2 data on the nuclides and reactions needed to analyze fast-reactor cores for the past three years, and produced covariance files. The present work was undertaken to re-examine the covariance files and to make some improvements. The covariances improved are the ones for the inelastic scattering cross section of 16O, the total cross section of 23Na, the fission cross section of 235U, the capture cross section of 238U, and the resolved resonance parameters for 238U. Moreover, the covariances of the 233U data were newly estimated in the present work. The covariances obtained were compiled in the ENDF-6 format. (author)
Spectroscopy of doubly heavy baryons
Gershtein, S.S.; Kiselev, V.V.; Likhoded, A.K.; Onishchenko, A.I.
2000-01-01
Within a nonrelativistic quark model featuring a QCD-motivated Buchmueller-Tye potential, the mass spectra for the families of doubly heavy baryons are calculated by assuming the quark-diquark structure of the baryon wave functions and by taking into account spin-dependent splitting. Physically motivated evidence that, in the case where heavy quarks have identical flavors, quasistationary excited states may be formed in the heavy-diquark subsystem is analyzed
Phase transition from nuclear matter to color superconducting quark matter
Bentz, W. E-mail: bentz@keyaki.cc.u-tokai.ac.jp; Horikawa, T.; Ishii, N.; Thomas, A.W
2003-06-02
We construct the nuclear and quark matter equations of state at zero temperature in an effective quark theory (the Nambu-Jona-Lasinio model), and discuss the phase transition between them. The nuclear matter equation of state is based on the quark-diquark description of the single nucleon, while the quark matter equation of state includes the effects of scalar diquark condensation (color superconductivity). The effect of diquark condensation on the phase transition is discussed in detail.
Leading Twist GPDs and Transverse Spin Densities in a Proton
Mondal, Chandan; Maji, Tanmay; Chakrabarti, Dipankar; Zhao, Xingbo
2018-05-01
We present a study of both chirally even and odd generalized parton distributions in the leading twist for the quarks in a proton using the light-front wavefunctions of a quark-diquark model predicted by the holographic QCD. For transversely polarized proton, both chiral even and chiral odd GPDs contribute to the spin densities which are related to the GPDs in transverse impact parameter space. Here, we also present a study of the spin densities for transversely polarized quark and proton.
Leitão, Sofia, E-mail: sofia.leitao@tecnico.ulisboa.pt [CFTP, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Stadler, Alfred, E-mail: stadler@uevora.pt [Departamento de Física, Universidade de Évora, 7000-671 Évora (Portugal); CFTP, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Peña, M.T., E-mail: teresa.pena@tecnico.ulisboa.pt [Departamento de Física, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); CFTP, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Biernat, Elmar P., E-mail: elmar.biernat@tecnico.ulisboa.pt [CFTP, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal)
2017-01-10
The Covariant Spectator Theory (CST) is used to calculate the mass spectrum and vertex functions of heavy–light and heavy mesons in Minkowski space. The covariant kernel contains Lorentz scalar, pseudoscalar, and vector contributions. The numerical calculations are performed in momentum space, where special care is taken to treat the strong singularities present in the confining kernel. The observed meson spectrum is very well reproduced after fitting a small number of model parameters. Remarkably, a fit to a few pseudoscalar meson states only, which are insensitive to spin–orbit and tensor forces and do not allow one to separate the spin–spin from the central interaction, leads to essentially the same model parameters as a more general fit. This demonstrates that the covariance of the chosen interaction kernel is responsible for the very accurate prediction of the spin-dependent quark–antiquark interactions.
ANL Critical Assembly Covariance Matrix Generation - Addendum
McKnight, Richard D. [Argonne National Lab. (ANL), Argonne, IL (United States)]; Grimm, Karl N. [Argonne National Lab. (ANL), Argonne, IL (United States)]
2014-01-13
In March 2012, a report was issued on covariance matrices for Argonne National Laboratory (ANL) critical experiments. That report detailed the theory behind the calculation of covariance matrices and the methodology used to determine the matrices for a set of 33 ANL experimental set-ups. Since that time, three new experiments have been evaluated and approved. This report essentially updates the previous report by adding in these new experiments to the preceding covariance matrix structure.
Neutron spectrum adjustment. The role of covariances
Remec, I.
1992-01-01
The neutron spectrum adjustment method is briefly reviewed. A practical example dealing with the determination of power reactor pressure vessel exposure rates is analysed. The adjusted exposure rates are found to be only slightly affected by the covariances of the measured reaction rates and activation cross sections, while the multigroup spectrum covariances are found to be important. Approximate spectrum covariance matrices, as suggested in ASTM E944-89, were found useful, but care is advised if they are applied in adjustments of spectra at locations without dosimetry. (author)
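The spectrum adjustment referred to above is, in its standard form, a generalized-least-squares update: a prior multigroup spectrum and its covariance are combined with measured reaction rates through the dosimetry cross sections. The sketch below shows this generic adjustment algebra (as used in STAYSL-type codes) with made-up three-group numbers; it is illustrative and not any specific code.

```python
import numpy as np

def gls_spectrum_adjustment(phi0, C_phi, R, m, V_m):
    """One generalized-least-squares adjustment step.

    phi0, C_phi : prior group spectrum and its covariance
    R           : reaction-rate response (cross-section) matrix
    m, V_m      : measured reaction rates and their covariance
    """
    S = R @ C_phi @ R.T + V_m                 # covariance of predicted rates
    G = C_phi @ R.T @ np.linalg.inv(S)        # gain matrix
    phi = phi0 + G @ (m - R @ phi0)           # adjusted spectrum
    C = C_phi - G @ R @ C_phi                 # adjusted (reduced) covariance
    return phi, C

# Illustrative three-group spectrum with two dosimetry reactions.
phi0 = np.array([1.0, 2.0, 0.5])
C_phi = np.diag((0.2 * phi0) ** 2)            # 20 % prior uncertainties
R = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 1.0]])
m = np.array([2.2, 2.4])
V_m = 0.01 * np.eye(2)
phi_adj, C_adj = gls_spectrum_adjustment(phi0, C_phi, R, m, V_m)
```

Adjustment pulls the predicted rates toward the measurements and reduces the spectrum uncertainties, which is why the input covariances matter.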
Modifications of Sp(2) covariant superfield quantization
Gitman, D.M.; Moshin, P.Yu
2003-12-04
We propose a modification of the Sp(2) covariant superfield quantization to realize a superalgebra of generating operators isomorphic to the massless limit of the corresponding superalgebra of the osp(1,2) covariant formalism. The modified scheme ensures the compatibility of the superalgebra of generating operators with extended BRST symmetry without imposing restrictions eliminating superfield components from the quantum action. The formalism coincides with the Sp(2) covariant superfield scheme and with the massless limit of the osp(1,2) covariant quantization in particular cases of gauge-fixing and solutions of the quantum master equations.
Activities of covariance utilization working group
Tsujimoto, Kazufumi
2013-01-01
During the past decade, there has been an interest in the calculational uncertainties induced by nuclear data uncertainties in the neutronics design of advanced nuclear systems. Covariance nuclear data are absolutely essential for the uncertainty analysis. In the latest version of JENDL, JENDL-4.0, the covariance data for many nuclides, especially actinide nuclides, were substantially enhanced. The growing interest in the uncertainty analysis and the covariance data has led to the organisation of the working group for covariance utilization under the JENDL committee. (author)
Covariant entropy bound and loop quantum cosmology
Ashtekar, Abhay; Wilson-Ewing, Edward
2008-01-01
We examine Bousso's covariant entropy bound conjecture in the context of radiation filled, spatially flat, Friedmann-Robertson-Walker models. The bound is violated near the big bang. However, the hope has been that quantum gravity effects would intervene and protect it. Loop quantum cosmology provides a near ideal setting for investigating this issue. For, on the one hand, quantum geometry effects resolve the singularity and, on the other hand, the wave function is sharply peaked at a quantum corrected but smooth geometry, which can supply the structure needed to test the bound. We find that the bound is respected. We suggest that the bound need not be an essential ingredient for a quantum gravity theory but may emerge from it under suitable circumstances.
Cherchi, Elisabetta; Guevara, Cristian Angelo
2012-01-01
of parameters increases is usually known as the “curse of dimensionality” in the simulation methods. We investigate this problem in the case of the random coefficients Logit model. We compare the traditional Maximum Simulated Likelihood (MSL) method with two alternative estimation methods: the Expectation–Maximization (EM) and the Laplace Approximation (HH) methods that do not require simulation. We use Monte Carlo experimentation to investigate systematically the performance of the methods under different circumstances, including different numbers of variables, sample sizes and structures of the variance...
Cross-covariance based global dynamic sensitivity analysis
Shi, Yan; Lu, Zhenzhou; Li, Zhao; Wu, Mengmeng
2018-02-01
For identifying the cross-covariance source of the dynamic output at each time instant for a structural system involving both input random variables and stochastic processes, a global dynamic sensitivity (GDS) technique is proposed. The GDS considers the effect of time-history inputs on the dynamic output. In the GDS, the cross-covariance decomposition is first developed to measure the contribution of the inputs to the output at different time instants, and an integration of the cross-covariance change over the specific time interval is employed to measure the whole contribution of the input to the cross-covariance of the output. Then, the GDS main effect indices and the GDS total effect indices can be easily defined after the integration, and they are effective in identifying the important inputs and the non-influential inputs on the cross-covariance of the output at each time instant, respectively. The established GDS analysis model has the same form as the classical ANOVA when it degenerates to the static case. After degeneration, the first-order partial effect can reflect the individual effects of inputs on the output variance, and the second-order partial effect can reflect the interaction effects on the output variance, which illustrates the consistency of the proposed GDS indices and the classical variance-based sensitivity indices. The MCS procedure and the Kriging surrogate method are developed to solve the proposed GDS indices. Several examples are introduced to illustrate the significance of the proposed GDS analysis technique and the effectiveness of the proposed solution.
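In the static case to which the GDS indices degenerate, main-effect indices are the familiar variance-based (Sobol/ANOVA) sensitivity indices. The sketch below shows a standard pick-freeze Monte Carlo estimator of those static first-order indices; it is generic textbook machinery, not the paper's cross-covariance decomposition or its Kriging-accelerated solver.

```python
import numpy as np

def first_order_indices(model, d, N=20000, seed=4):
    """Pick-freeze estimator of first-order (main effect) Sobol indices.

    Two independent input samples are drawn; for each input i, a hybrid
    sample shares only column i with the first sample, and the covariance
    of the two model outputs estimates Var(E[y|x_i]).
    """
    rng = np.random.default_rng(seed)
    A = rng.random((N, d))
    B = rng.random((N, d))
    yA = model(A)
    S = np.empty(d)
    for i in range(d):
        C = B.copy()
        C[:, i] = A[:, i]            # freeze input i, resample the rest
        yC = model(C)
        S[i] = np.cov(yA, yC)[0, 1] / yA.var()
    return S

# Additive test model y = x1 + 2 x2, x ~ U(0,1): exact indices 0.2 and 0.8.
S = first_order_indices(lambda X: X[:, 0] + 2.0 * X[:, 1], d=2)
```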
An Empirical State Error Covariance Matrix Orbit Determination Example
Frisbee, Joseph H., Jr.
2015-01-01
State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. Then it follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully include all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty and whether the source is anticipated or not. It is expected that the empirical error covariance matrix will give a better, statistical representation of the state error in poorly modeled systems or when sensor performance
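The idea can be sketched in a few lines: compute the usual weighted-least-squares state estimate and theoretical covariance, then scale the covariance by the average weighted residual so that unmodeled errors actually present in the residuals inflate the reported uncertainty. This is our reading of the idea as a minimal sketch, not the report's exact formulation.

```python
import numpy as np

def wls_with_empirical_covariance(A, W, y):
    """Weighted least squares plus an empirical state error covariance.

    The theoretical covariance (A^T W A)^{-1} reflects only the assumed
    observation errors; scaling it by the average weighted residual lets
    all actual error sources, modeled or not, enter the covariance.
    """
    P_theory = np.linalg.inv(A.T @ W @ A)
    x_hat = P_theory @ A.T @ W @ y
    r = y - A @ x_hat
    scale = (r @ W @ r) / y.size        # average weighted residual variance
    return x_hat, P_theory, scale * P_theory

# Weights assume unit measurement noise, but the actual noise is larger,
# so the empirical covariance should exceed the theoretical one.
rng = np.random.default_rng(5)
A = rng.normal(size=(200, 3))
x_true = np.array([1.0, -2.0, 0.5])
W = np.eye(200)
y = A @ x_true + 2.0 * rng.normal(size=200)
x_hat, P_theory, P_emp = wls_with_empirical_covariance(A, W, y)
```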
A covariant canonical description of Liouville field theory
Papadopoulos, G.; Spence, B.
1993-03-01
This paper presents a new parametrisation of the space of solutions of Liouville field theory on a cylinder. In this parametrisation, the solutions are well-defined and manifestly real functions over all space-time and all of parameter space. It is shown that the resulting covariant phase space of the Liouville theory is diffeomorphic to the Hamiltonian one, and to the space of initial data of the theory. The Poisson brackets are derived and shown to be those of the co-tangent bundle of the loop group of the real line. Using Hamiltonian reduction, it is shown that this covariant phase space formulation of Liouville theory may also be obtained from the covariant phase space formulation of the Wess-Zumino-Witten model. 19 refs
General covariance and quantum theory
Mashhoon, B.
1986-01-01
The extension of the principle of relativity to general coordinate systems is based on the hypothesis that an accelerated observer is locally equivalent to a hypothetical inertial observer with the same velocity as the noninertial observer. This hypothesis of locality is expected to be valid for classical particle phenomena as well as for classical wave phenomena but only in the short-wavelength approximation. The generally covariant theory is therefore expected to be in conflict with the quantum theory which is based on wave-particle duality. This is explicitly demonstrated for the frequency of electromagnetic radiation measured by a uniformly rotating observer. The standard Doppler formula is shown to be valid only in the geometric optics approximation. A new definition for the frequency is proposed, and the resulting formula for the frequency measured by the rotating observer is shown to be consistent with expectations based on the classical theory of electrons. A tentative quantum theory is developed on the basis of the generalization of the Bohr frequency condition to include accelerated observers. The description of the causal sequence of events is assumed to be independent of the motion of the observer. Furthermore, the quantum hypothesis is supposed to be valid for all observers. The implications of this theory are critically examined. The new formula for frequency, which is still based on the hypothesis of locality, leads to the observation of negative energy quanta by the rotating observer and is therefore in conflict with the quantum theory
Noisy covariance matrices and portfolio optimization II
Pafka, Szilárd; Kondor, Imre
2003-03-01
Recent studies inspired by results from random matrix theory (Galluccio et al.: Physica A 259 (1998) 449; Laloux et al.: Phys. Rev. Lett. 83 (1999) 1467; Risk 12 (3) (1999) 69; Plerou et al.: Phys. Rev. Lett. 83 (1999) 1471) found that covariance matrices determined from empirical financial time series appear to contain such a high amount of noise that their structure can essentially be regarded as random. This seems, however, to be in contradiction with the fundamental role played by covariance matrices in finance, which constitute the pillars of modern investment theory and have also gained industry-wide applications in risk management. Our paper is an attempt to resolve this embarrassing paradox. The key observation is that the effect of noise strongly depends on the ratio r= n/ T, where n is the size of the portfolio and T the length of the available time series. On the basis of numerical experiments and analytic results for some toy portfolio models we show that for relatively large values of r (e.g. 0.6) noise does, indeed, have the pronounced effect suggested by Galluccio et al. (1998), Laloux et al. (1999) and Plerou et al. (1999) and illustrated later by Laloux et al. (Int. J. Theor. Appl. Finance 3 (2000) 391), Plerou et al. (Phys. Rev. E, e-print cond-mat/0108023) and Rosenow et al. (Europhys. Lett., e-print cond-mat/0111537) in a portfolio optimization context, while for smaller r (around 0.2 or below), the error due to noise drops to acceptable levels. Since the length of available time series is for obvious reasons limited in any practical application, any bound imposed on the noise-induced error translates into a bound on the size of the portfolio. In a related set of experiments we find that the effect of noise depends also on whether the problem arises in asset allocation or in a risk measurement context: if covariance matrices are used simply for measuring the risk of portfolios with a fixed composition rather than as inputs to optimization, the
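The central observation, that the estimation noise in a sample covariance matrix is governed by the ratio r = n/T rather than by n or T alone, is easy to reproduce numerically. The toy experiment below (our own minimal setup with uncorrelated unit-variance assets, not the authors' portfolio models) compares the relative error of the sample covariance at small and large r.

```python
import numpy as np

def sample_covariance_error(n, T, seed=0):
    """Relative Frobenius error of the sample covariance of T i.i.d.
    observations of n uncorrelated unit-variance assets (true cov = I)."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(T, n))
    S = (X.T @ X) / T
    return np.linalg.norm(S - np.eye(n)) / np.linalg.norm(np.eye(n))

err_small_r = sample_covariance_error(n=30, T=600)   # r = 0.05
err_large_r = sample_covariance_error(n=30, T=50)    # r = 0.6
```

At r = 0.6 the noise in the estimated covariance is several times larger than at r = 0.05, consistent with the error scaling like sqrt(n/T).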
Semiparametric estimation of covariance matrices for longitudinal data.
Fan, Jianqing; Wu, Yichao
2008-12-01
Estimation of longitudinal data covariance structure poses significant challenges because the data are usually collected at irregular time points. A viable semiparametric model for covariance matrices was proposed in Fan, Huang and Li (2007) that allows one to estimate the variance function nonparametrically and to estimate the correlation function parametrically by aggregating information from irregular and sparse data points within each subject. However, the asymptotic properties of their quasi-maximum likelihood estimator (QMLE) of the parameters in the covariance model are largely unknown. In the current work, we address this problem in the context of more general models for the conditional mean function, including parametric, nonparametric and semiparametric models. We also consider the possibility of a rough mean regression function and introduce a difference-based method to reduce biases in the context of varying-coefficient partially linear mean regression models. This provides a more robust estimator of the covariance function under a wider range of situations. Under some technical conditions, consistency and asymptotic normality are obtained for the QMLE of the parameters in the correlation function. Simulation studies and a real data example are used to illustrate the proposed approach.
Conservation laws and covariant equations of motion for spinning particles
Obukhov, Yuri N.; Puetzfeld, Dirk
2015-01-01
We derive the Noether identities and the conservation laws for general gravitational models with arbitrarily interacting matter and gravitational fields. These conservation laws are used for the construction of the covariant equations of motion for test bodies with minimal and nonminimal coupling.
Comparing fixed effects and covariance structure estimators for panel data
Ejrnæs, Mette; Holm, Anders
2006-01-01
In this article, the authors compare the traditional econometric fixed effect estimator with the maximum likelihood estimator implied by covariance structure models for panel data. Their findings are that the maximum likelihood estimator is remarkably robust to certain types of misspecifications...
Deffner, Veronika; Küchenhoff, Helmut; Breitner, Susanne; Schneider, Alexandra; Cyrys, Josef; Peters, Annette
2018-03-13
The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements from mobile devices show classical, possibly individual-specific, measurement error; Berkson-type error, which may also vary individually, occurs if measurements from fixed monitoring stations are used. The combination of fixed site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show that autocorrelation may severely change the attenuation of the effect estimates. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partially yielded better results than the use of incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents, and the substantial measurement error of the ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error.
An Empirical State Error Covariance Matrix for Batch State Estimation
Frisbee, Joseph H., Jr.
2011-01-01
State estimation techniques serve effectively to provide mean state estimates. However, confidence in the state error covariance matrices provided as part of these techniques, and in their ability to adequately describe the uncertainty in the estimated states, is often limited. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix results in an empirical state error covariance matrix that fully accounts for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next, it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using standard statistical analysis, it follows directly how to determine the empirical state error covariance matrix. This matrix will contain the total uncertainty in the
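A minimal sketch of one common way to fold measurement residuals into a formal least-squares covariance, loosely in the spirit of this abstract (a toy linear problem with illustrative names, not the author's actual formulation): scale the formal covariance by the average weighted residual variance so the result reflects the residuals actually observed rather than only the assumed observation errors.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear estimation problem: y = A x + noise, solved by weighted least squares.
m, n = 100, 3
A = rng.standard_normal((m, n))
x_true = np.array([1.0, -2.0, 0.5])
sigma = 0.3                               # true noise level, unknown to the estimator
y = A @ x_true + sigma * rng.standard_normal(m)

W = np.eye(m)                             # assumed (possibly mismatched) weights
N = A.T @ W @ A                           # normal matrix
x_hat = np.linalg.solve(N, A.T @ W @ y)   # weighted least squares solution

P_formal = np.linalg.inv(N)               # maps only the assumed observation errors

# Scale by the average weighted residual variance so the covariance
# reflects the residuals actually observed, not just the assumed weights.
r = y - A @ x_hat
s2 = (r @ W @ r) / (m - n)
P_empirical = s2 * P_formal
```

With unit weights the scale factor s2 recovers the true noise variance, so P_empirical is a sensible uncertainty for x_hat even when the assumed weights were wrong.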
Parameters of the covariance function of galaxies
Fesenko, B.I.; Onuchina, E.V.
1988-01-01
The two-point angular covariance functions for two samples of galaxies are considered using quick methods of analysis. It is concluded that previous investigations overestimated the amplitude of the covariance function in the Lick counts and underestimated the rate of decrease of the function.
Covariance and sensitivity data generation at ORNL
Leal, L. C.; Derrien, H.; Larson, N. M.; Alpan, A.
2005-01-01
Covariance data are required to assess uncertainties in design parameters in several nuclear applications. The error estimation of calculated quantities relies on the nuclear data uncertainty information available in the basic nuclear data libraries, such as the US Evaluated Nuclear Data Library, ENDF/B. The uncertainty files in the ENDF/B library are obtained from the analysis of experimental data and are stored as variance and covariance data. In this paper we address the generation of covariance data in the resonance region with the computer code SAMMY. SAMMY is used in the evaluation of experimental data in the resolved and unresolved resonance energy regions. The data fitting of cross sections is based on the generalised least-squares formalism (Bayesian theory) together with the resonance formalism described by R-matrix theory. Two approaches are used in SAMMY for the generation of resonance parameter covariance data. In the evaluation process SAMMY generates a set of resonance parameters that fit the data and provides the resonance parameter covariances. For evaluations where no resonance parameter covariance data are available, the alternative is an approach called 'retroactive' resonance parameter covariance generation. In this paper, we describe the application of the retroactive covariance generation approach to the gadolinium isotopes. (authors)
Position Error Covariance Matrix Validation and Correction
Frisbee, Joe, Jr.
2016-01-01
In order to calculate operationally accurate collision probabilities, the position error covariance matrices predicted at times of closest approach must be sufficiently accurate representations of the position uncertainties. This presentation will discuss why the Gaussian distribution is a reasonable expectation for the position uncertainty and how this assumed distribution type is used in the validation and correction of position error covariance matrices.
Quality Quantification of Evaluated Cross Section Covariances
Varet, S.; Dossantos-Uzarralde, P.; Vayatis, N.
2015-01-01
Presently, several methods are used to estimate the covariance matrix of evaluated nuclear cross sections. Because the resulting covariance matrices can differ according to the method used and to the assumptions of the method, we propose a general and objective approach to quantify the quality of the covariance estimation for evaluated cross sections. The first step consists in defining an objective criterion; the second step is its computation. In this paper the Kullback-Leibler distance is proposed for the quality quantification of a covariance matrix estimation and its inverse. It is based on the distance to the true covariance matrix. A method based on the bootstrap is presented for the estimation of this criterion, which can be applied with most methods for covariance matrix estimation and without knowledge of the true covariance matrix. The full approach is illustrated on the ⁸⁵Rb nucleus evaluations, and the results are then used for a discussion of scoring and Monte Carlo approaches to covariance matrix estimation of cross section evaluations.
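For reference, a Kullback-Leibler distance between two covariance matrices can be written as the KL divergence between zero-mean Gaussians with those covariances. The sketch below (illustrative, not the authors' code; the matrices are made up) implements that closed form, which is zero exactly when the estimate equals the true matrix.

```python
import numpy as np

def gaussian_kl(sigma1, sigma2):
    """KL divergence D(N(0, sigma1) || N(0, sigma2)) between zero-mean
    Gaussians: 0.5 * (tr(sigma2^-1 sigma1) - n + ln det(sigma2)/det(sigma1))."""
    n = sigma1.shape[0]
    inv2 = np.linalg.inv(sigma2)
    _, logdet1 = np.linalg.slogdet(sigma1)
    _, logdet2 = np.linalg.slogdet(sigma2)
    return 0.5 * (np.trace(inv2 @ sigma1) - n + logdet2 - logdet1)

true_cov = np.array([[1.0, 0.3], [0.3, 2.0]])     # hypothetical "true" matrix
estimate = np.array([[1.1, 0.2], [0.2, 1.8]])     # hypothetical estimated matrix

kl_self = gaussian_kl(true_cov, true_cov)         # zero at the truth
kl_est = gaussian_kl(estimate, true_cov)          # positive for any other estimate
```

In the bootstrap setting described above, this quantity would be evaluated against resampled surrogates when the true covariance matrix is unknown.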
On the algebraic structure of covariant anomalies and covariant Schwinger terms
Kelnhofer, G.
1992-01-01
A cohomological characterization of covariant anomalies and covariant Schwinger terms in an anomalous Yang-Mills theory is formulated and geometrically interpreted. The BRS and anti-BRS transformations are defined as purely differential geometric objects. Finally, the covariant descent equations are formulated within this context. (author)
The Goodness of Covariance Selection Problem from AUC Bounds
Khajavi, Navid Tafaghodi; Kuh, Anthony
2016-01-01
We conduct a study of graphical models and discuss the quality of model selection approximation by formulating the problem as a detection problem and examining the area under the curve (AUC). We specifically look at the model selection problem for jointly Gaussian random vectors, for which the problem simplifies to the covariance selection problem widely discussed in the literature by Dempster [1]. In this paper, we give the definition for the correlation appro...
Covariance descriptor fusion for target detection
Cukur, Huseyin; Binol, Hamidullah; Bal, Abdullah; Yavuz, Fatih
2016-05-01
Target detection is one of the most important topics for military and civilian applications. To address such detection tasks, hyperspectral imaging sensors provide useful image data containing both spatial and spectral information. Target detection involves various challenging scenarios for hyperspectral images. To overcome these challenges, the covariance descriptor presents many advantages, and the detection capability of the conventional covariance descriptor technique can be improved by fusion methods. In this paper, hyperspectral bands are clustered according to inter-band correlation. Target detection is then realized by fusing covariance descriptor results based on the band clusters. The proposed combination technique is denoted Covariance Descriptor Fusion (CDF). The efficiency of the CDF is evaluated by applying it to hyperspectral imagery to detect man-made objects. The obtained results show that the CDF performs better than the conventional covariance descriptor.
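The basic covariance descriptor underlying such methods can be sketched as follows (a generic illustration, not the paper's pipeline: the region data are random stand-ins, and the dissimilarity shown is the common generalized-eigenvalue metric, which may differ from the one the authors use): each image region is summarized by the covariance of its per-pixel feature vectors, and regions are compared through a metric on covariance matrices.

```python
import numpy as np

def covariance_descriptor(features):
    """Covariance descriptor of a region: features is (num_pixels, d),
    one feature vector (e.g. spectral band values) per pixel."""
    return np.cov(features, rowvar=False)

def descriptor_distance(c1, c2):
    """Generalized-eigenvalue (Foerstner-style) metric between two
    covariance descriptors: sqrt(sum_i ln^2 lambda_i(c1, c2))."""
    L = np.linalg.cholesky(c2)
    Linv = np.linalg.inv(L)
    M = Linv @ c1 @ Linv.T          # whitened c1; eigenvalues are generalized
    lam = np.linalg.eigvalsh(M)
    return float(np.sqrt(np.sum(np.log(lam) ** 2)))

rng = np.random.default_rng(2)
region1 = rng.standard_normal((500, 8))        # 500 pixels, 8 spectral bands
region2 = rng.standard_normal((500, 8)) * 2.0  # region with different statistics

desc1 = covariance_descriptor(region1)
desc2 = covariance_descriptor(region2)
```

The descriptor is a small symmetric matrix regardless of region size, which is what makes fusion across band clusters practical.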
Structure functions of hadrons in the QCD effective theory
Shigetani, Takayuki
1996-01-01
We study the structure functions of hadrons within the low energy effective theory of QCD, aiming to clarify the link between the low energy effective theory, where non-perturbative dynamics is essential, and high energy deep inelastic scattering experiments. We calculate the leading twist matrix elements of the structure functions at the low energy model scale within the effective theory. The calculated structure functions are evolved to the high momentum scale with the help of perturbative QCD and compared with experimental data. Through the comparison of the model calculations with experiment, we discuss how the non-perturbative dynamics of the effective theory is reflected in deep inelastic phenomena. We first evaluate the structure functions of the pseudoscalar mesons using the NJL model. The resulting structure functions show reasonable agreement with experiment. We then study the quark distribution functions of the nucleon using a covariant quark-diquark model. We calculate three leading twist distribution functions: the spin-independent f₁(x), the longitudinal spin distribution g₁(x), and the chiral-odd transversity spin distribution h₁(x). The results for f₁(x) and g₁(x) turn out to be consistent with available experiments because of the strong spin-0 diquark correlation. (author)
Moerbeek, Mirjam; van Schie, Sander
2016-07-11
The number of clusters in a cluster randomized trial is often low. It is therefore likely that random assignment of clusters to treatment conditions results in covariate imbalance. There are no studies that quantify the consequences of covariate imbalance in cluster randomized trials on parameter and standard error bias and on power to detect treatment effects. The consequences of covariate imbalance in unadjusted and adjusted linear mixed models are investigated by means of a simulation study. The factors in this study are the degree of imbalance, the covariate effect size, the cluster size and the intraclass correlation coefficient. The covariate is binary and measured at the cluster level; the outcome is continuous and measured at the individual level. The results show that covariate imbalance results in negligible parameter bias and small standard error bias in adjusted linear mixed models. Ignoring the possibility of covariate imbalance while calculating the sample size at the cluster level may result in a loss in power of at most 25% in the adjusted linear mixed model. The results are more severe for the unadjusted linear mixed model: parameter biases up to 100% and standard error biases up to 200% may be observed. Power levels based on the unadjusted linear mixed model are often too low. The consequences are most severe for large clusters and/or small intraclass correlation coefficients, since then the required number of clusters to achieve a desired power level is smallest. The possibility of covariate imbalance should be taken into account while calculating the sample size of a cluster randomized trial; otherwise, more sophisticated methods to randomize clusters to treatments should be used, such as stratification or balance algorithms. All relevant covariates should be carefully identified, actually measured and included in the statistical model to avoid severe levels of parameter and standard error bias and insufficient power levels.
Sato, Masanori; Matsubara, Takahiko; Takada, Masahiro; Hamana, Takashi
2011-01-01
Using 1000 ray-tracing simulations for a Λ-dominated cold dark matter model in Sato et al., we study the covariance matrix of cosmic shear correlation functions, the standard statistic used in previous measurements. The shear correlation function at a particular separation angle is affected by Fourier modes over a wide range of multipoles, even beyond the survey area, which complicates the analysis of the covariance matrix. To overcome such obstacles we first construct Gaussian shear simulations from the 1000 realizations and then use them to disentangle the Gaussian contribution to the covariance matrix measured from the original simulations. We find that an analytical formula for the Gaussian covariance overestimates the covariance amplitudes due to the effect of the finite survey area. Furthermore, the clean separation of the Gaussian covariance allows us to examine the non-Gaussian contributions as a function of separation angle and source redshift. For upcoming surveys with typical source redshifts of z_s = 0.6 and 1.0, the non-Gaussian contribution to the diagonal covariance components at 1 arcmin scales is greater than the Gaussian contribution by a factor of 20 and 10, respectively. Predictions based on the halo model reproduce the simulation results qualitatively well, but show a sizable disagreement in the covariance amplitudes. By combining these simulation results we develop a fitting formula for the covariance matrix of a survey with arbitrary area coverage, taking into account the effects of the finite survey area on the Gaussian covariance.
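The basic ensemble estimator underlying covariance studies of this kind can be sketched as follows (a toy stand-in with made-up numbers, not the paper's ray-tracing pipeline): measure a data vector in each realization, subtract the ensemble mean, and average the outer products with the unbiased 1/(N-1) normalization.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy ensemble standing in for the ray-tracing realizations: each one yields
# a data vector, e.g. a correlation function measured in n_bins angular bins.
n_real, n_bins = 1000, 4
L = np.tril(rng.uniform(0.2, 1.0, size=(n_bins, n_bins)))  # Cholesky factor
true_cov = L @ L.T                       # covariance used to generate the data
realizations = rng.standard_normal((n_real, n_bins)) @ L.T

mean = realizations.mean(axis=0)
diff = realizations - mean
cov_est = diff.T @ diff / (n_real - 1)   # unbiased ensemble covariance estimate

rel_err = np.linalg.norm(cov_est - true_cov) / np.linalg.norm(true_cov)
```

With 1000 realizations the ensemble estimate tracks the generating covariance closely; the paper's extra step of building Gaussianized realizations isolates which part of such a matrix is Gaussian.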
Covariance specification and estimation to improve top-down Green House Gas emission estimates
Ghosh, S.; Lopez-Coto, I.; Prasad, K.; Whetstone, J. R.
2015-12-01
The National Institute of Standards and Technology (NIST) operates the North-East Corridor (NEC) project and the Indianapolis Flux Experiment (INFLUX) in order to develop measurement methods to quantify sources of greenhouse gas (GHG) emissions, as well as their uncertainties, in urban domains using a top-down inversion method. Top-down inversion updates prior knowledge using observations in a Bayesian way. One primary consideration in a Bayesian inversion framework is the covariance structure of (1) the emission prior residuals and (2) the observation residuals (i.e. the difference between observations and model-predicted observations). These covariance matrices are respectively referred to as the prior covariance matrix and the model-data mismatch covariance matrix. It is known that the choice of these covariances can have a large effect on estimates. The main objective of this work is to determine the impact of different covariance models on inversion estimates and their associated uncertainties in urban domains. We use a pseudo-data Bayesian inversion framework using footprints (i.e. sensitivities of tower measurements of GHGs to surface emissions) and emission priors (based on the Hestia project to quantify fossil-fuel emissions) to estimate posterior emissions using different covariance schemes. The posterior emission estimates and uncertainties are compared to the hypothetical truth. We find that, if we correctly specify spatial variability and spatio-temporal variability in the prior and model-data mismatch covariances respectively, then we can compute more accurate posterior estimates. We discuss a few covariance models to introduce space-time interacting mismatches, along with estimation of the involved parameters. We then compare several candidate prior spatial covariance models from the Matern covariance class and estimate their parameters with specified mismatches. We find that best-fitted prior covariances are not always best in recovering the truth. To achieve
Veerkamp, R.F.; Goddard, M.E.
1998-01-01
Multiple-trait BLUP evaluations of test day records require a large number of genetic parameters. This study estimated covariances with a reduced model that included covariance functions in two dimensions (stage of lactation and herd production level) and all three yield traits. Records came from
Covariant constraints for generic massive gravity and analysis of its characteristics
Deser, S.; Sandora, M.; Waldron, A.
2014-01-01
We perform a covariant constraint analysis of massive gravity valid for its entire parameter space, demonstrating that the model generically propagates 5 degrees of freedom; this is also verified by a new and streamlined Hamiltonian description. The constraint's covariant expression permits...
A covariant form of the Maxwell's equations in four-dimensional spaces with an arbitrary signature
Lukac, I.
1991-01-01
The concept of duality in the four-dimensional spaces with the arbitrary constant metric is strictly mathematically formulated. A covariant model for covariant and contravariant bivectors in this space based on three four-dimensional vectors is proposed. 14 refs
Predicting kidney graft failure using time-dependent renal function covariates
de Bruijne, Mattheus H. J.; Sijpkens, Yvo W. J.; Paul, Leendert C.; Westendorp, Rudi G. J.; van Houwelingen, Hans C.; Zwinderman, Aeilko H.
2003-01-01
Chronic rejection and recurrent disease are the major causes of late graft failure in renal transplantation. To assess outcome, most researchers use Cox proportional hazard analysis with time-fixed covariates. We developed a model adding time-dependent renal function covariates to improve the
Covariant quantizations in plane and curved spaces
Assirati, J.L.M.; Gitman, D.M.
2017-01-01
We present covariant quantization rules for nonsingular finite-dimensional classical theories with flat and curved configuration spaces. We first construct a family of covariant quantizations in flat spaces and Cartesian coordinates. This family is parametrized by a function ω(θ), θ ∈ (0,1), which describes an ambiguity of the quantization. We generalize this construction to covariant quantizations of theories with flat configuration spaces but arbitrary curvilinear coordinates. Then we construct a so-called minimal family of covariant quantizations for theories with curved configuration spaces, parametrized by the same function ω(θ). Finally, we describe a wider family of covariant quantizations in curved spaces, parametrized by two functions, the previous ω(θ) and an additional function Θ(x,ξ); the minimal family is its subfamily at Θ = 1. We study the constructed quantizations in detail, proving their consistency and covariance. As a physical application, we consider the quantization of a non-relativistic particle moving in a curved space, discussing the problem of a quantum potential. Applying the covariant quantizations in flat spaces to the old problem of constructing the quantum Hamiltonian in polar coordinates, we directly obtain the correct result. (orig.)
Extracting the Omega- electric quadrupole moment from lattice QCD data
G. Ramalho, M.T. Pena
2011-03-01
The Omega- has an extremely long lifetime and is the most stable of the spin-3/2 baryons; its magnetic moment is therefore very accurately known. Nevertheless, its electric quadrupole moment has never been measured, although estimates exist in different formalisms. In principle, lattice QCD simulations at present provide the most appropriate way to estimate the Omega- form factors as functions of the square of the transferred four-momentum, Q², since they describe baryon systems at the physical mass of the strange quark. However, lattice QCD form factors, and in particular G_E2, are determined at finite Q² only, and the extraction of the electric quadrupole moment, Q_Omega = G_E2(0) e/(2 M_Omega), involves an extrapolation of the numerical lattice results. In this work we reproduce the lattice QCD data with a covariant spectator quark model for the Omega- which includes a mixture of S and two D states for the relative quark-diquark motion. Once the model is calibrated, it is used to determine Q_Omega. Our prediction is Q_Omega = (0.96 ± 0.02) × 10⁻² e·fm² [G_E2(0) = 0.680 ± 0.012].
Smith, D.L.
1988-01-01
The last decade has been a period of rapid development in the implementation of covariance-matrix methodology in nuclear data research. This paper offers some perspective on the progress which has been made, on some of the unresolved problems, and on the potential yet to be realized. These discussions address a variety of issues related to the development of nuclear data. Topics examined are: the importance of designing and conducting experiments so that error information is conveniently generated; the procedures for identifying error sources and quantifying their magnitudes and correlations; the combination of errors; the importance of consistent and well-characterized measurement standards; the role of covariances in data parameterization (fitting); the estimation of covariances for values calculated from mathematical models; the identification of abnormalities in covariance matrices and the analysis of their consequences; the problems encountered in representing covariance information in evaluated files; the role of covariances in the weighting of diverse data sets; the comparison of various evaluations; the influence of primary-data covariance in the analysis of covariances for derived quantities (sensitivity); and the role of covariances in the merging of the diverse nuclear data information. 226 refs., 2 tabs
Luiz Felipe Waihrich Guterres
2007-06-01
The aim of this research was to study the effect of accounting for the covariance between the additive genetic direct and maternal effects (cov_d-m) on the estimates of genetic parameters and on predictions of genetic values (VG) for average daily gain from birth to weaning (GMDND) and from weaning to 550 days of age (GMDDS). A total of 28,949 records for GMDND and 11,884 for GMDDS from a Brangus breed population (5/8 Angus x 3/8 Nellore), collected from 1986 to 2002, were analyzed. The (co)variance components were obtained by REML. In the animal model for GMDND, the additive genetic direct, maternal and residual effects were considered random, and the contemporary group at weaning (CG205), the interaction of the Nellore-Angus breed genetic percentage of bull and cow (FGNA), and the covariables age of the cow at birth (IV) and age at weaning (ID) were treated as fixed effects. For GMDDS the model was the same, except that CG205 was replaced by the contemporary group at 550 days of age (CG550) and ID by age at 550 days. In both models, the permanent environmental effect of the cow was considered a random effect. The heritabilities estimated for direct genetic effects ranged from 0.14 ± 0.03 to 0.21 ± 0.03 and for maternal effects from 0.00 ± 0.01 to 0.15 ± 0.02; the estimates were smaller when cov_d-m was included in the model for GMDND. The correlations between direct and maternal genetic effects were negative: -0.25 ± 0.12 (GMDND) and -0.77 ± 0.19 (GMDDS). The likelihood ratio test showed no significant difference, at the 5% significance level, between the adopted models for both traits. The rank correlations between the VG predicted by the two models were 0.89 for GMDND and 0.98 for GMDDS, suggesting that a slight change in the ranking of the animals can occur for GMDND.
Students’ Covariational Reasoning in Solving Integrals’ Problems
Harini, N. V.; Fuad, Y.; Ekawati, R.
2018-01-01
Covariational reasoning plays an important role in recognizing how quantities vary in learning calculus. This study investigates students' covariational reasoning concerning two covarying quantities in integral problems. Six undergraduate students were chosen to solve problems that involved interpreting and representing how quantities change in tandem. Interviews were conducted to reveal the students' reasoning while solving covariational problems. The results emphasize that the undergraduate students were able to construct the relation of dependent variables that change in tandem with the independent variable. However, students faced difficulty in forming images of continuously changing rates and could not accurately apply the concept of integrals. These findings suggest that the teaching of calculus should place increased emphasis on coordinating images of two quantities changing in tandem, on instantaneous rate of change, and on promoting conceptual knowledge of integration techniques.
Covariant Quantization with Extended BRST Symmetry
Geyer, B.; Gitman, D. M.; Lavrov, P. M.
1999-01-01
A short review of covariant quantization methods based on BRST-antiBRST symmetry is given. In particular, problems of the correct definition of the Sp(2)-symmetric quantization scheme known as triplectic quantization are considered.
Covariant extensions and the nonsymmetric unified field
Borchsenius, K.
1976-01-01
The problem of generally covariant extension of Lorentz invariant field equations, by means of covariant derivatives extracted from the nonsymmetric unified field, is considered. It is shown that the contracted curvature tensor can be expressed in terms of a covariant gauge derivative which contains the gauge derivative corresponding to minimal coupling, if the universal constant p, characterizing the nonsymmetric theory, is fixed in terms of Planck's constant and the elementary quantum of charge. By this choice the spinor representation of the linear connection becomes closely related to the spinor affinity used by Infeld and Van Der Waerden (Sitzungsber. Preuss. Akad. Wiss. Phys. Math. Kl.; 9:380 (1933)) in their generally covariant formulation of Dirac's equation. (author)
Covariance Spectroscopy for Fissile Material Detection
Trainham, Rusty; Tinsley, Jim; Hurley, Paul; Keegan, Ray
2009-01-01
Nuclear fission produces multiple prompt neutrons and gammas at each fission event. The resulting daughter nuclei continue to emit delayed radiation as neutrons boil off, beta decay occurs, etc. All of these radiations are causally connected, and therefore correlated. The correlations are generally positive, but when different decay channels compete, so that some radiations tend to exclude others, negative correlations can also be observed. A similar problem of reduced complexity is that of cascade radiation, whereby a simple radioactive decay produces two or more correlated gamma rays at each decay. Covariance is the usual means of measuring correlation, and techniques of covariance mapping may be useful to produce distinct signatures of special nuclear materials (SNM). A covariance measurement can also be used to filter data streams, because uncorrelated signals are largely rejected; the technique is generally more effective than a coincidence measurement. In this poster, we concentrate on cascades and the covariance filtering problem.
Covariant amplitudes in Polyakov string theory
Aoyama, H.; Dhar, A.; Namazie, M.A.
1986-01-01
A manifestly Lorentz-covariant and reparametrization-invariant procedure for computing string amplitudes using Polyakov's formulation is described. Both bosonic and superstring theories are dealt with. The computation of string amplitudes is greatly facilitated by this formalism. (orig.)
Covariance upperbound controllers for networked control systems
Ko, Sang Ho
2012-01-01
This paper deals with designing covariance upperbound controllers for a linear system that can be used in a networked control environment in which control laws are calculated in a remote controller and transmitted through a shared communication link to the plant. In order to compensate for possible packet losses during the transmission, two different techniques are often employed: the zero-input and the hold-input strategy. These use zero input and the latest control input, respectively, when a packet is lost. For each strategy, we synthesize a class of output covariance upperbound controllers for a given covariance upperbound and a packet loss probability. Existence conditions of the covariance upperbound controller are also provided for each strategy. Through numerical examples, performance of the two strategies is compared in terms of feasibility of implementing the controllers
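The zero-input and hold-input strategies described above can be sketched in a toy closed loop. This is an illustrative simulation under assumed values (plant gains, a 30% loss probability), not the controller synthesis of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Scalar plant x_{k+1} = a*x_k + b*u_k with a stabilizing feedback u = -K*x.
# All numbers here are illustrative assumptions.
a, b, K = 1.2, 1.0, 0.9
p_loss = 0.3          # assumed packet loss probability
steps = 2000

def simulate(strategy):
    """Run the closed loop; on a lost packet apply 0 ('zero') or the
    last received input ('hold')."""
    x, u_held = 1.0, 0.0
    xs = []
    for _ in range(steps):
        u_new = -K * x                      # controller-side computation
        if rng.random() < p_loss:           # packet dropped in the network
            u = 0.0 if strategy == "zero" else u_held
        else:
            u = u_new
            u_held = u_new                  # actuator remembers last input
        x = a * x + b * u + 0.01 * rng.standard_normal()
        xs.append(x)
    return np.var(xs)                       # empirical state covariance

var_zero = simulate("zero")
var_hold = simulate("hold")
```

Comparing `var_zero` and `var_hold` against a covariance upperbound is the feasibility question the paper's numerical examples address.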
Covariance data evaluation for experimental data
Liu Tingjin
1993-01-01
Some methods and codes have been developed and utilized for covariance data evaluation of experimental data, including parameter analysis, physical analysis, spline fitting, etc. These methods and codes can be used in many different cases.
Earth Observing System Covariance Realism Updates
Ojeda Romero, Juan A.; Miguel, Fred
2017-01-01
This presentation will be given at the International Earth Science Constellation Mission Operations Working Group meetings June 13-15, 2017 to discuss the Earth Observing System Covariance Realism updates.
Laser Covariance Vibrometry for Unsymmetrical Mode Detection
Kobold, Michael C
2006-01-01
Simulated cross-spectral covariance (CSC) from optical return from simulated surface vibration indicates CW phase modulation may be an appropriate phenomenology for adequate classification of vehicles by structural mode...
Error Covariance Estimation of Mesoscale Data Assimilation
Xu, Qin
2005-01-01
The goal of this project is to explore and develop new methods of error covariance estimation that will provide necessary statistical descriptions of prediction and observation errors for mesoscale data assimilation...
Simultaneous Mean and Covariance Correction Filter for Orbit Estimation.
Wang, Xiaoxu; Pan, Quan; Ding, Zhengtao; Ma, Zhengya
2018-05-05
This paper proposes a novel filtering design, from the viewpoint of identification rather than the conventional nonlinear estimation schemes (NESs), to improve the performance of orbit state estimation for a space target. First, a nonlinear perturbation is viewed or modeled as an unknown input (UI) coupled with the orbit state, to avoid the intractable nonlinear perturbation integral (INPI) required by NESs. Second, a simultaneous mean and covariance correction filter (SMCCF), based on a two-stage expectation maximization (EM) framework, is proposed to simply and analytically fit or identify the first two moments (FTM) of the perturbation (viewed as UI), instead of directly computing the INPI as in NESs. Orbit estimation performance is greatly improved by utilizing the fitted UI-FTM to simultaneously correct the state estimate and its covariance. Third, provided enough information is mined, SMCCF should outperform existing NESs and standard identification algorithms (which view the UI as a constant independent of the state and utilize only the identified UI mean to correct the state estimate, disregarding its covariance), since it further incorporates the useful covariance information in addition to the mean of the UI. Finally, our simulations demonstrate the superior performance of SMCCF via an orbit estimation example.
Heteroscedasticity resistant robust covariance matrix estimator
Víšek, Jan Ámos
2010-01-01
Roč. 17, č. 27 (2010), s. 33-49 ISSN 1212-074X Grant - others:GA UK(CZ) GA402/09/0557 Institutional research plan: CEZ:AV0Z10750506 Keywords : Regression * Covariance matrix * Heteroscedasticity * Resistant Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2011/SI/visek-heteroscedasticity resistant robust covariance matrix estimator.pdf
Phase-covariant quantum cloning of qudits
Fan Heng; Imai, Hiroshi; Matsumoto, Keiji; Wang, Xiang-Bin
2003-01-01
We study the phase-covariant quantum cloning machine for qudits, i.e., the input states in a d-level quantum system have complex coefficients with arbitrary phase but constant module. A cloning unitary transformation is proposed. After optimizing the fidelity between input state and single qudit reduced density operator of output state, we obtain the optimal fidelity for 1 to 2 phase-covariant quantum cloning of qudits and the corresponding cloning transformation
Noncommutative Gauge Theory with Covariant Star Product
Zet, G.
2010-01-01
We present a noncommutative gauge theory with covariant star product on a space-time with torsion. In order to obtain the covariant star product one imposes some restrictions on the connection of the space-time. Then, a noncommutative gauge theory is developed applying this product to the case of differential forms. Some comments on the advantages of using a space-time with torsion to describe the gravitational field are also given.
Covariant phase difference observables in quantum mechanics
Heinonen, Teiko; Lahti, Pekka; Pellonpaeae, Juha-Pekka
2003-01-01
Covariant phase difference observables are determined in two different ways, by a direct computation and by a group theoretical method. A characterization of phase difference observables which can be expressed as the difference of two phase observables is given. The classical limits of such phase difference observables are determined and the Pegg-Barnett phase difference distribution is obtained from the phase difference representation. The relation of Ban's theory to the covariant phase theories is exhibited
Hierarchical matrix approximation of large covariance matrices
Litvinenko, Alexander
2015-01-07
We approximate large non-structured covariance matrices in the H-matrix format with a log-linear computational cost and storage O(n log n). We compute inverse, Cholesky decomposition and determinant in H-format. As an example we consider the class of Matern covariance functions, which are very popular in spatial statistics, geostatistics, machine learning and image analysis. Applications are: kriging and optimal design
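As a point of reference for the quantities the abstract lists, a dense O(n^3) computation of a Matérn (ν = 3/2) covariance, its Cholesky factor and log-determinant might look as follows; the H-matrix format replaces exactly these dense steps with log-linear approximations. The grid size, length scale and nugget are illustrative assumptions:

```python
import numpy as np

# Dense baseline for the quantities computed in H-format in the paper:
# a Matern (nu = 3/2) covariance on scattered 2-D points, its Cholesky
# factor and log-determinant.  O(n^3) here; the H-matrix version brings
# storage to O(n log n) and work to log-linear cost.
rng = np.random.default_rng(1)
n, ell, sigma2, nugget = 200, 0.3, 1.0, 1e-6

pts = rng.random((n, 2))                              # scattered locations
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
s = np.sqrt(3.0) * d / ell
C = sigma2 * (1.0 + s) * np.exp(-s) + nugget * np.eye(n)   # Matern 3/2

L = np.linalg.cholesky(C)                             # C = L L^T
logdet = 2.0 * np.sum(np.log(np.diag(L)))             # log det C
```

Kriging and optimal-design computations reuse precisely these factors, which is why accelerating them pays off.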
Hierarchical matrix approximation of large covariance matrices
Litvinenko, Alexander
2015-01-05
We approximate large non-structured covariance matrices in the H-matrix format with a log-linear computational cost and storage O(n log n). We compute inverse, Cholesky decomposition and determinant in H-format. As an example we consider the class of Matérn covariance functions, which are very popular in spatial statistics, geostatistics, machine learning and image analysis. Applications are: kriging and optimal design.
Hierarchical matrix approximation of large covariance matrices
Litvinenko, Alexander; Genton, Marc G.; Sun, Ying; Tempone, Raul
2015-01-01
We approximate large non-structured covariance matrices in the H-matrix format with a log-linear computational cost and storage O(n log n). We compute inverse, Cholesky decomposition and determinant in H-format. As an example we consider the class of Matern covariance functions, which are very popular in spatial statistics, geostatistics, machine learning and image analysis. Applications are: kriging and optimal design
Hierarchical matrix approximation of large covariance matrices
Litvinenko, Alexander; Genton, Marc G.; Sun, Ying; Tempone, Raul
2015-01-01
We approximate large non-structured covariance matrices in the H-matrix format with a log-linear computational cost and storage O(n log n). We compute inverse, Cholesky decomposition and determinant in H-format. As an example we consider the class of Matérn covariance functions, which are very popular in spatial statistics, geostatistics, machine learning and image analysis. Applications are: kriging and optimal design.
Covariant perturbations of Schwarzschild black holes
Clarkson, Chris A; Barrett, Richard K
2003-01-01
We present a new covariant and gauge-invariant perturbation formalism for dealing with spacetimes having spherical symmetry (or some preferred spatial direction) in the background, and apply it to the case of gravitational wave propagation in a Schwarzschild black-hole spacetime. The 1 + 3 covariant approach is extended to a '1 + 1 + 2 covariant sheet' formalism by introducing a radial unit vector in addition to the timelike congruence, and decomposing all covariant quantities with respect to this. The background Schwarzschild solution is discussed and a covariant characterization is given. We give the full first-order system of linearized 1 + 1 + 2 covariant equations, and we show how, by introducing (time and spherical) harmonic functions, these may be reduced to a system of first-order ordinary differential equations and algebraic constraints for the 1 + 1 + 2 variables which may be solved straightforwardly. We show how both odd- and even-parity perturbations may be unified by the discovery of a covariant, frame- and gauge-invariant, transverse-traceless tensor describing gravitational waves, which satisfies a covariant wave equation equivalent to the Regge-Wheeler equation for both even- and odd-parity perturbations. We show how the Zerilli equation may be derived from this tensor, and derive a similar transverse-traceless tensor equation equivalent to this equation. The so-called special quasinormal modes with purely imaginary frequency emerge naturally. The significance of the degrees of freedom in the choice of the two frame vectors is discussed, and we demonstrate that, for a certain frame choice, the underlying dynamics is governed purely by the Regge-Wheeler tensor. The two transverse-traceless Weyl tensors which carry the curvature of gravitational waves are discussed, and we give the closed system of four first-order ordinary differential equations describing their propagation. Finally, we consider the extension of this work to the study of
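For reference, the Regge-Wheeler equation that the covariant wave equation above is shown to reproduce has the standard Schwarzschild-coordinate form (with tortoise coordinate r_*):

```latex
\frac{d^2\psi}{dr_*^2} + \left[\omega^2 - V(r)\right]\psi = 0,
\qquad
V(r) = \left(1 - \frac{2M}{r}\right)\left[\frac{\ell(\ell+1)}{r^2} - \frac{6M}{r^3}\right],
\qquad
r_* = r + 2M \ln\!\left(\frac{r}{2M} - 1\right).
```

The paper's result is that a single covariant, gauge-invariant tensor satisfies this equation for both parities at once.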
An alternative covariance estimator to investigate genetic heterogeneity in populations.
Heslot, Nicolas; Jannink, Jean-Luc
2015-11-26
For genomic prediction and genome-wide association studies (GWAS) using mixed models, covariance between individuals is estimated using molecular markers. Based on the properties of mixed models, using available molecular data for prediction is optimal if this covariance is known. Under this assumption, adding individuals to the analysis should never be detrimental. However, some empirical studies showed that increasing training population size decreased prediction accuracy. Recently, results from theoretical models indicated that even if marker density is high and the genetic architecture of traits is controlled by many loci with small additive effects, the covariance between individuals, which depends on relationships at causal loci, is not always well estimated by the whole-genome kinship. We propose an alternative covariance estimator named K-kernel, to account for potential genetic heterogeneity between populations that is characterized by a lack of genetic correlation, and to limit the information flow between a priori unknown populations in a trait-specific manner. This is similar to a multi-trait model and parameters are estimated by REML and, in extreme cases, it can allow for an independent genetic architecture between populations. As such, K-kernel is useful to study the problem of the design of training populations. K-kernel was compared to other covariance estimators or kernels to examine its fit to the data, cross-validated accuracy and suitability for GWAS on several datasets. It provides a significantly better fit to the data than the genomic best linear unbiased prediction model and, in some cases it performs better than other kernels such as the Gaussian kernel, as shown by an empirical null distribution. In GWAS simulations, alternative kernels control type I errors as well as or better than the classical whole-genome kinship and increase statistical power. No or small gains were observed in cross-validated prediction accuracy. This alternative
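A minimal sketch of the baseline that K-kernel is proposed as an alternative to: a VanRaden-style whole-genome kinship built from centred marker scores (simplified here to division by the marker count, with simulated genotypes rather than the study's data):

```python
import numpy as np

# Whole-genome kinship used as the covariance between individuals in
# genomic prediction.  Genotypes are simulated allele counts in {0,1,2};
# the normalisation (divide by marker count) is one common simplification.
rng = np.random.default_rng(6)
n_ind, p_mark = 100, 1000
geno = rng.integers(0, 3, size=(n_ind, p_mark)).astype(float)

Z = geno - geno.mean(axis=0)              # centre each marker column
G = Z @ Z.T / p_mark                      # genomic relationship matrix
```

The abstract's point is that `G` estimates relationships genome-wide, which can misrepresent covariance at causal loci across heterogeneous populations; K-kernel limits that information flow.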
Lagishetty, Chakradhar V; Duffull, Stephen B
2015-11-01
Clinical studies include occurrences of rare variables, such as genotypes, whose frequency and strength render their effects difficult to estimate from a dataset. Variables that influence the estimated value of a model-based parameter are termed covariates. It is often difficult to determine whether such an effect is significant, since type I error can be inflated when the covariate is rare. Their presence may have either an insubstantial effect on the parameters of interest, in which case they are ignorable, or conversely they may be influential and therefore non-ignorable. When these covariate effects cannot be estimated due to lack of power yet are non-ignorable, they are considered nuisance covariates: they must be accounted for but, owing to type I error inflation, are of limited interest. This study assesses methods of handling nuisance covariate effects. The specific objectives are (1) calibrating the frequency of a covariate that is associated with type I error inflation, (2) calibrating the strength that renders it non-ignorable and (3) evaluating methods for handling these non-ignorable covariates in a nonlinear mixed effects model setting. Type I error was determined for the Wald test. Methods considered for handling the nuisance covariate effects were case deletion, Box-Cox transformation and inclusion of a specific fixed effects parameter. Non-ignorable nuisance covariates were found to be effectively handled through addition of a fixed effect parameter.
Ole E. Barndorff-Nielsen; Neil Shephard
2002-01-01
This paper analyses multivariate high frequency financial data using realised covariation. We provide a new asymptotic distribution theory for standard methods such as regression, correlation analysis and covariance. It will be based on a fixed interval of time (e.g. a day or week), allowing the number of high frequency returns during this period to go to infinity. Our analysis allows us to study how high frequency correlations, regressions and covariances change through time. In particular w...
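The realised-covariation estimator whose asymptotics the paper develops can be sketched as a sum of outer products of high-frequency return vectors over a fixed interval; the returns below are synthetic stand-ins for market data:

```python
import numpy as np

# Realised covariance over one fixed interval (e.g. a day): the sum of
# outer products of high-frequency return vectors.  The per-tick
# covariance and tick count are illustrative assumptions.
rng = np.random.default_rng(2)
true_cov = np.array([[1.0, 0.6],
                     [0.6, 2.0]]) * 1e-4          # per-tick covariance
m = 5000                                          # ticks in the interval
returns = rng.multivariate_normal([0.0, 0.0], true_cov, size=m)

realised_cov = returns.T @ returns                # sum_j r_j r_j^T
# Realised correlation and regression coefficients follow directly:
realised_corr = realised_cov[0, 1] / np.sqrt(realised_cov[0, 0]
                                             * realised_cov[1, 1])
```

Letting the number of ticks m grow within the fixed interval is exactly the in-fill asymptotic regime the paper's distribution theory covers.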
Yoneoka, Daisuke; Henmi, Masayuki
2017-11-30
Recently, the number of clinical prediction models sharing the same regression task has increased in the medical literature. However, evidence synthesis methodologies that use the results of these regression models have not been sufficiently studied, particularly in meta-analysis settings where only regression coefficients are available. One of the difficulties lies in the differences between the categorization schemes of continuous covariates across different studies. In general, categorization methods using cutoff values are study specific across available models, even if they focus on the same covariates of interest. Differences in the categorization of covariates could lead to serious bias in the estimated regression coefficients and thus in subsequent syntheses. To tackle this issue, we developed synthesis methods for linear regression models with different categorization schemes of covariates. A 2-step approach to aggregate the regression coefficient estimates is proposed. The first step is to estimate the joint distribution of covariates by introducing a latent sampling distribution, which uses one set of individual participant data to estimate the marginal distribution of covariates with categorization. The second step is to use a nonlinear mixed-effects model with correction terms for the bias due to categorization to estimate the overall regression coefficients. Especially in terms of precision, numerical simulations show that our approach outperforms conventional methods, which only use studies with common covariates or ignore the differences between categorization schemes. The method developed in this study is also applied to a series of WHO epidemiologic studies on white blood cell counts. Copyright © 2017 John Wiley & Sons, Ltd.
Apanasovich, Tatiyana V.; Genton, Marc G.; Sun, Ying
2012-01-01
We introduce a valid parametric family of cross-covariance functions for multivariate spatial random fields where each component has a covariance function from a well-celebrated Matérn class. Unlike previous attempts, our model indeed allows
Estimation of Fuzzy Measures Using Covariance Matrices in Gaussian Mixtures
Nishchal K. Verma
2012-01-01
This paper presents a novel computational approach for estimating fuzzy measures directly from a Gaussian mixture model (GMM). The mixture components of the GMM provide the membership functions for the input-output fuzzy sets. By treating the consequent part as a function of fuzzy measures, we derive its coefficients from the covariance matrices obtained directly from the GMM, and the defuzzified output is constructed from both the premise and consequent parts of the nonadditive fuzzy rules, taking the form of a Choquet integral. The computational burden involved with the solution of the λ-measure is minimized using the Q-measure. The fuzzy model whose fuzzy measures were computed using covariance matrices found in the GMM has been successfully applied to two benchmark problems and one real-time electric load dataset from an Indian utility. The performance of the resulting model in many experimental studies, including the above-mentioned application, is found to be better than or comparable to recently available fuzzy models. The main contribution of this paper is the estimation of fuzzy measures efficiently and directly from the covariance matrices found in the GMM, greatly avoiding the computational burden of learning them iteratively and solving polynomial equations of the order of the number of input-output variables.
Do Time-Varying Covariances, Volatility Comovement and Spillover Matter?
Lakshmi Balasubramanyan
2005-01-01
Financial markets and their respective assets are so intertwined that analyzing any single market in isolation ignores important information. We investigate whether time-varying volatility comovement and spillover impact the true variance-covariance matrix under a time-varying correlation setup. Statistically significant volatility spillover and comovement among the US, UK and Japan is found. To demonstrate the importance of modelling volatility comovement and spillover, we look at a simple portfo...
Treatment of Nuclear Data Covariance Information in Sample Generation
Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Adams, Brian M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wieselquist, William [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division
2017-10-01
This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on developing a sampling capability that can handle the challenges of generating samples from nuclear cross-section data. The covariance information between energy groups tends to be very ill-conditioned and thus poses a problem for traditional methods of generating correlated samples. This report outlines a method that addresses sample generation from cross-section matrices.
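One common way to draw correlated samples from an ill-conditioned covariance, sketched here on an illustrative smooth matrix rather than actual cross-section data, is to clip the tiny negative eigenvalues that appear as numerical noise before forming a factor. Whether this matches the report's method is not stated in the abstract; this is a generic technique for the problem it describes:

```python
import numpy as np

# A smooth inter-group covariance is nearly rank-deficient, so a plain
# Cholesky draw can fail.  Eigendecompose, clip negative eigenvalues
# (numerical noise), and sample with the clipped factor instead.
rng = np.random.default_rng(3)
g = 30                                    # energy groups (illustrative)
x = np.linspace(0.0, 1.0, g)
cov = np.exp(-((x[:, None] - x[None, :]) / 0.5) ** 2)   # ill-conditioned

w, V = np.linalg.eigh(cov)
w_clipped = np.clip(w, 0.0, None)         # drop tiny negative eigenvalues
A = V * np.sqrt(w_clipped)                # A A^T ~= cov

samples = rng.standard_normal((1000, g)) @ A.T
sample_cov = np.cov(samples, rowvar=False)
```

The empirical `sample_cov` should reproduce `cov` up to Monte Carlo error, which is the basic check for any such sampler.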
Treatment of Nuclear Data Covariance Information in Sample Generation
Swiler, Laura Painton; Adams, Brian M.; Wieselquist, William
2017-01-01
This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on developing a sampling capability that can handle the challenges of generating samples from nuclear cross-section data. The covariance information between energy groups tends to be very ill-conditioned and thus poses a problem for traditional methods of generating correlated samples. This report outlines a method that addresses sample generation from cross-section matrices.
Evaluating dynamic covariance matrix forecasting and portfolio optimization
Sendstad, Lars Hegnes; Holten, Dag Martin
2012-01-01
In this thesis we have evaluated the covariance forecasting ability of the simple moving average, the exponential moving average and the dynamic conditional correlation models. Overall we found that a dynamic portfolio can gain significant improvements by implementing a multivariate GARCH forecast. We further divided the global investment universe into sectors and regions in order to investigate the relative portfolio performance of several asset allocation strategies with both variance and c...
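Of the forecasts compared in the thesis, the exponential moving average is the simplest to sketch: S_t = λ S_{t-1} + (1 - λ) r_t r_tᵀ. The smoothing constant λ = 0.94 below is the common RiskMetrics choice, assumed here rather than taken from the thesis:

```python
import numpy as np

# EWMA covariance forecast: recursively shrink yesterday's forecast
# toward today's return outer product.  Returns are simulated.
rng = np.random.default_rng(4)
lam = 0.94                                # assumed smoothing constant
T, k = 500, 3
returns = rng.standard_normal((T, k)) * 0.01

S = np.cov(returns[:50], rowvar=False)    # initialise on a warm-up window
for r in returns[50:]:
    S = lam * S + (1.0 - lam) * np.outer(r, r)
# S is now the one-step-ahead covariance forecast fed to the optimizer.
```

Feeding `S` into a mean-variance optimizer each period is the dynamic-portfolio evaluation loop the thesis runs for each forecasting model.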
Metzger, Stefan; Durden, David; Sturtevant, Cove; Luo, Hongyan; Pingintha-Durden, Natchaya; Sachs, Torsten; Serafimovich, Andrei; Hartmann, Jörg; Li, Jiahong; Xu, Ke; Desai, Ankur R.
2017-08-01
Large differences in instrumentation, site setup, data format, and operating system stymie the adoption of a universal computational environment for processing and analyzing eddy-covariance (EC) data. This results in limited software applicability and extensibility in addition to often substantial inconsistencies in flux estimates. Addressing these concerns, this paper presents the systematic development of portable, reproducible, and extensible EC software achieved by adopting a development and systems operation (DevOps) approach. This software development model is used for the creation of the eddy4R family of EC code packages in the open-source R language for statistical computing. These packages are community developed, iterated via the Git distributed version control system, and wrapped into a portable and reproducible Docker filesystem that is independent of the underlying host operating system. The HDF5 hierarchical data format then provides a streamlined mechanism for highly compressed and fully self-documented data ingest and output. The usefulness of the DevOps approach was evaluated for three test applications. First, the resultant EC processing software was used to analyze standard flux tower data from the first EC instruments installed at a National Ecological Observatory (NEON) field site. Second, through an aircraft test application, we demonstrate the modular extensibility of eddy4R to analyze EC data from other platforms. Third, an intercomparison with commercial-grade software showed excellent agreement (R2 = 1.0 for CO2 flux). In conjunction with this study, a Docker image containing the first two eddy4R packages and an executable example workflow, as well as first NEON EC data products are released publicly. We conclude by describing the work remaining to arrive at the automated generation of science-grade EC fluxes and benefits to the science community at large. This software development model is applicable beyond EC and more generally builds
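The flux computation that EC software such as eddy4R automates reduces, at its core, to Reynolds decomposition and the covariance of vertical wind with a scalar; this standalone sketch uses synthetic 10 Hz data in place of instrument output (and omits the coordinate rotations, despiking and corrections real processing requires):

```python
import numpy as np

# Core eddy-covariance step: the turbulent flux of a scalar is the
# covariance of vertical wind speed w with concentration c over an
# averaging period.  Data below are synthetic stand-ins.
rng = np.random.default_rng(5)
n = 10 * 60 * 30                          # 30 min at 10 Hz
w = rng.standard_normal(n) * 0.3          # vertical wind (m s^-1)
c = 400.0 + 0.5 * w + rng.standard_normal(n) * 2.0   # scalar + noise

w_prime = w - w.mean()                    # Reynolds decomposition
c_prime = c - c.mean()
flux = np.mean(w_prime * c_prime)         # <w'c'>, the kinematic flux
```

Everything else the paper describes (Docker, HDF5, Git workflows) is infrastructure for running this kind of computation reproducibly across sites and platforms.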
S. Metzger
2017-08-01
Large differences in instrumentation, site setup, data format, and operating system stymie the adoption of a universal computational environment for processing and analyzing eddy-covariance (EC) data. This results in limited software applicability and extensibility in addition to often substantial inconsistencies in flux estimates. Addressing these concerns, this paper presents the systematic development of portable, reproducible, and extensible EC software achieved by adopting a development and systems operation (DevOps) approach. This software development model is used for the creation of the eddy4R family of EC code packages in the open-source R language for statistical computing. These packages are community developed, iterated via the Git distributed version control system, and wrapped into a portable and reproducible Docker filesystem that is independent of the underlying host operating system. The HDF5 hierarchical data format then provides a streamlined mechanism for highly compressed and fully self-documented data ingest and output. The usefulness of the DevOps approach was evaluated for three test applications. First, the resultant EC processing software was used to analyze standard flux tower data from the first EC instruments installed at a National Ecological Observatory (NEON) field site. Second, through an aircraft test application, we demonstrate the modular extensibility of eddy4R to analyze EC data from other platforms. Third, an intercomparison with commercial-grade software showed excellent agreement (R2 = 1.0 for CO2 flux). In conjunction with this study, a Docker image containing the first two eddy4R packages and an executable example workflow, as well as first NEON EC data products are released publicly. We conclude by describing the work remaining to arrive at the automated generation of science-grade EC fluxes and benefits to the science community at large. This software development model is applicable beyond EC
Cortisol covariation within parents of young children: Moderation by relationship aggression.
Saxbe, Darby E; Adam, Emma K; Schetter, Christine Dunkel; Guardino, Christine M; Simon, Clarissa; McKinney, Chelsea O; Shalowitz, Madeleine U
2015-12-01
Covariation in diurnal cortisol has been observed in several studies of cohabiting couples. In two such studies (Liu et al., 2013; Saxbe and Repetti, 2010), relationship distress was associated with stronger within-couple correlations, suggesting that couples' physiological linkage with each other may indicate problematic dyadic functioning. Although intimate partner aggression has been associated with dysregulation in women's diurnal cortisol, it has not yet been tested as a moderator of within-couple covariation. This study reports on a diverse sample of 122 parents who sampled salivary cortisol on matched days for two years following the birth of an infant. Partners showed strong positive cortisol covariation. In couples with higher levels of partner-perpetrated aggression reported by women at one year postpartum, both women and men had a flatter diurnal decrease in cortisol and stronger correlations with partners' cortisol sampled at the same timepoints. In other words, relationship aggression was linked both with indices of suboptimal cortisol rhythms in both members of the couples and with stronger within-couple covariation coefficients. These results persisted when relationship satisfaction and demographic covariates were included in the model. During some of the sampling days, some women were pregnant with a subsequent child, but pregnancy did not significantly moderate cortisol levels or within-couple covariation. The findings suggest that couples experiencing relationship aggression have both suboptimal neuroendocrine profiles and stronger covariation. Cortisol covariation is an understudied phenomenon with potential implications for couples' relationship functioning and physical health. Copyright © 2015 Elsevier Ltd. All rights reserved.
Estimation of covariances of Cr and Ni neutron nuclear data in JENDL-3.2
Shibata, Keiichi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Oh, Soo Youl [Korea Atomic Energy Research Institute, Taejon (Korea)
2000-02-01
Covariances of nuclear data have been estimated for two nuclides contained in JENDL-3.2. The nuclides considered are Cr and Ni, which are regarded as important for nuclear design studies of fast reactors. The physical quantities for which covariances are deduced are cross sections and the first-order Legendre-polynomial coefficient for the angular distribution of elastically scattered neutrons. The covariances were estimated using the same methodology that had been used in the JENDL-3.2 evaluation, in order to keep consistency between mean values and their covariances. The least-squares fitting code GMA was used in estimating covariances for reactions whose JENDL-3.2 cross sections had been evaluated taking account of measurements. Covariances of nuclear model calculations were deduced using the KALMAN system. The covariance data obtained were compiled in the ENDF-6 format, and will be put into the JENDL-3.2 Covariance File, which is one of the JENDL special purpose files. (author)
Nuclear data covariances in the Indian context
Ganesan, S.
2014-01-01
The topic of covariances has been recognized as an important part of several ongoing nuclear data science activities, since 2007, in the Nuclear Data Physics Centre of India (NDPCI). A Phase-1 project on nuclear data covariances, in collaboration with the Statistics department of Manipal University, Karnataka (Prof. K.M. Prasad and Prof. S. Nair), was executed successfully during the 2007-2011 period. In Phase-1, the NDPCI conducted three national Theme meetings on nuclear data covariances, sponsored by the DAE-BRNS, in 2008, 2010 and 2013. The emphasis in Phase-1 was on a thorough basic understanding of the concept of covariances, including assigning uncertainties to experimental data in terms of partial errors and micro-correlations, through study and detailed discussion of the open literature. Towards the end of Phase-1, measurements and a first-time covariance analysis of cross-sections for the 58 Ni (n, p) 58 Co reaction, measured at the Mumbai Pelletron accelerator using the 7 Li (p,n) reaction as a neutron source in the MeV energy region, were performed under a PhD programme on nuclear data covariances in which two students, Shri B.S. Shivashankar and Ms. Shanti Sheela, are enrolled. India is also successfully developing a team of young researchers to compile nuclear data uncertainties, with a perspective on covariances, in the IAEA-EXFOR format. A Phase-II DAE-BRNS-NDPCI project proposal at Manipal has been submitted and is undergoing peer review at this time. In Phase-2, modern nuclear data evaluation techniques that include covariances will be further studied as a first-time research and development effort. These efforts include the use of techniques such as the Kalman filter. Presently, a 48-hour lecture series on the treatment of errors and their propagation is being formulated under the auspices of the Homi Bhabha National Institute. The talk describes the progress achieved thus far in the learning curve of the above-mentioned and exciting
Do current cosmological observations rule out all covariant Galileons?
Peirone, Simone; Frusciante, Noemi; Hu, Bin; Raveri, Marco; Silvestri, Alessandra
2018-03-01
We revisit the cosmology of covariant Galileon gravity in view of the most recent cosmological data sets, including weak lensing. As a higher-derivative theory, covariant Galileon models do not have a ΛCDM limit and predict a very different structure formation pattern compared with the standard ΛCDM scenario. Previous cosmological analyses suggest that this model is marginally disfavored, yet cannot be completely ruled out. In this work we use a more recent and extended combination of data, and we allow for more freedom in the cosmology by including a massive neutrino sector with three different mass hierarchies. We use the Planck measurements of cosmic microwave background temperature and polarization; baryonic acoustic oscillation measurements by BOSS DR12; local measurements of H0; the joint light-curve analysis supernovae sample; and, for the first time, weak gravitational lensing from the KiDS Collaboration. We find that, in order to provide a reasonable fit, a nonzero neutrino mass is indeed necessary, but we do not report any sizable difference among the three neutrino hierarchies. Finally, comparison of the Bayesian evidence to that of ΛCDM shows that in all the cases considered, covariant Galileon models are statistically ruled out by cosmological data.
Supersymmetric gauged scale covariance in ten and lower dimensions
Nishino, Hitoshi; Rajpoot, Subhash
2004-01-01
We present globally supersymmetric models of gauged scale covariance in ten, six, and four dimensions. This is an application of a recent similar gauging in three dimensions for a massive self-dual vector multiplet. In ten dimensions, we couple a single vector multiplet to another vector multiplet, where the latter gauges the scale covariance of the former. Due to scale covariance, the system does not have a Lagrangian formulation but only a set of field equations, like Type IIB supergravity in ten dimensions. As by-products, we construct similar models in six dimensions with N=(2,0) supersymmetry and in four dimensions with N=1 supersymmetry. We finally obtain a similar model with N=4 supersymmetry in four dimensions, with consistent interactions that were not previously known. We expect a series of descendant theories in dimensions lower than ten, obtained by dimensional reduction. This result also indicates that similar mechanisms will work for other vector and scalar multiplets in space-time dimensions lower than ten.
Graph Sampling for Covariance Estimation
Chepuri, Sundeep Prabhakar
2017-04-25
In this paper the focus is on subsampling as well as reconstructing the second-order statistics of signals residing on nodes of arbitrary undirected graphs. Second-order stationary graph signals may be obtained by graph filtering zero-mean white noise and they admit a well-defined power spectrum whose shape is determined by the frequency response of the graph filter. Estimating the graph power spectrum forms an important component of stationary graph signal processing and related inference tasks such as Wiener prediction or inpainting on graphs. The central result of this paper is that by sampling a significantly smaller subset of vertices and using simple least squares, we can reconstruct the second-order statistics of the graph signal from the subsampled observations, and more importantly, without any spectral priors. To this end, both a nonparametric approach as well as parametric approaches including moving average and autoregressive models for the graph power spectrum are considered. The results specialize for undirected circulant graphs in that the graph nodes leading to the best compression rates are given by the so-called minimal sparse rulers. A near-optimal greedy algorithm is developed to design the subsampling scheme for the non-parametric and the moving average models, whereas a particular subsampling scheme that allows linear estimation for the autoregressive model is proposed. Numerical experiments on synthetic as well as real datasets related to climatology and processing handwritten digits are provided to demonstrate the developed theory.
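The least-squares reconstruction described above can be sketched numerically. This is a minimal illustration, not the paper's algorithm: the graph, the filter, and the sampling set below are ad hoc assumptions, and we only verify that the estimated spectrum reproduces the subsampled covariance; identifiability of the full spectrum depends on the choice of sampling set (e.g. the minimal sparse rulers discussed in the paper).

```python
import numpy as np

N = 8
# Cycle (circulant) graph: adjacency, Laplacian, and its eigendecomposition.
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
L = np.diag(A.sum(axis=1)) - A
lam, U = np.linalg.eigh(L)

# A second-order stationary signal: white noise through a low-pass graph
# filter has power spectrum p and covariance C = U diag(p) U^T.
p_true = 1.0 / (1.0 + lam) ** 2
C = U @ np.diag(p_true) @ U.T

idx = [0, 1, 3]                      # subsampled vertex set (ad hoc choice)
Us = U[idx, :]
Cs = C[np.ix_(idx, idx)]             # covariance of the subsampled observations

# Cs[i, j] = sum_k Us[i, k] p[k] Us[j, k], so vec(Cs) is linear in p;
# build the corresponding design matrix and solve by least squares.
G = np.einsum('ik,jk->ijk', Us, Us).reshape(-1, N)
p_hat, *_ = np.linalg.lstsq(G, Cs.reshape(-1), rcond=None)

# The estimate reproduces the observed second-order statistics exactly.
assert np.allclose(G @ p_hat, Cs.reshape(-1), atol=1e-10)
```

With only 3 of the 8 vertices observed, the linear system already constrains the spectrum; the paper's sparse-ruler designs choose `idx` so that the system becomes well posed at the best possible compression rate.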
Schroedinger covariance states in anisotropic waveguides
Angelow, A.; Trifonov, D.
1995-03-01
In this paper squeezed and covariance states based on the Schroedinger inequality, and their connection with other nonclassical states, are considered for the particular case of an anisotropic waveguide in LiNbO 3 . The problem of photon creation and of the generation of squeezed and Schroedinger covariance states in optical waveguides is solved in two steps. 1. The electromagnetic field is quantized in the presence of the dielectric waveguide using a normal-mode expansion; the photon creation and annihilation operators are introduced by expanding the vector potential A(r,t) in a series of Sturm-Liouville mode functions. 2. In terms of these operators the Hamiltonian of the field in a nonlinear waveguide is derived. For this Hamiltonian we construct the covariance states as stable states (with nonzero covariance) which minimize the Schroedinger uncertainty relation. The evolution of the three second moments of q̂_j and p̂_j is calculated; for this Hamiltonian all three moments are expressed in terms of a single real parameter s. It is found how the covariance, via this parameter s, depends on the waveguide profile n(x,y), on the mode distributions u_j(x,y), and on the waveguide phase mismatch Δβ. (author). 37 refs
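The minimization property stated in the abstract can be checked numerically for a simple one-parameter Gaussian family. The parameterization below (with ħ = 1) is an assumption for illustration, not the paper's actual second moments; it merely exhibits states with nonzero covariance that saturate the Schroedinger inequality σ_qq σ_pp − σ_qp² ≥ (ħ/2)² for every value of the parameter s.

```python
import math

hbar = 1.0

def second_moments(s):
    # Hypothetical one-parameter family of second moments, analogous to the
    # single real parameter s in the abstract: a correlated Gaussian state.
    sqq = 0.5 * hbar * math.cosh(2 * s)   # <(Δq)^2>
    spp = 0.5 * hbar * math.cosh(2 * s)   # <(Δp)^2>
    sqp = 0.5 * hbar * math.sinh(2 * s)   # covariance <ΔqΔp + ΔpΔq>/2
    return sqq, spp, sqp

for s in (0.0, 0.3, 1.0):
    sqq, spp, sqp = second_moments(s)
    # Schroedinger inequality saturated: sqq*spp - sqp**2 == (hbar/2)**2,
    # since cosh^2 - sinh^2 = 1, with nonzero covariance whenever s != 0.
    assert abs(sqq * spp - sqp ** 2 - (hbar / 2) ** 2) < 1e-12
```

For s = 0 this reduces to the ordinary minimum-uncertainty (coherent) case with zero covariance; s ≠ 0 gives a correlated state that still minimizes the Schroedinger relation, which is the defining property of the covariance states discussed above.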